
Microsoft Blames the Messengers
Roger writes: "In an essay published on microsoft.com, Scott Culp, Manager of the Microsoft Security Response Center, calls on security experts to 'end information anarchy' and stop releasing sample code that exploits security holes in Windows and other operating systems. 'It's high time the security community stopped providing the blueprints for building these weapons,' Culp writes in the essay. 'And it's high time that computer users insisted that the security community live up to its obligation to protect them.' See the story on CNET News.com."
MS (Score:4, Offtopic)
It's probably high time that Microsoft stop building houses made of straw to defend against big bad 'net wolves... It'd sure make a lot of our lives easier...
Some other choice quotes : (Score:5, Insightful)
All three goals? More on this later - but assuming he's right about the rest of the essay, you'd expect there to be some pressure to address the vulnerabilities, would there not? He even goes further, saying that published exploits are antithetical to getting patches out. Brilliant logic.
Providing a recipe for exploiting a vulnerability doesn't aid administrators in protecting their networks. In the vast majority of cases, the only way to protect against a security vulnerability is to apply a fix that changes the system behavior and eliminates the vulnerability; in other cases, systems can be protected through administrative procedures. But regardless of whether the remediation takes the form of a patch or a workaround, an administrator doesn't need to know how a vulnerability works in order to understand how to protect against it, any more than a person needs to know how to cause a headache in order to take an aspirin.
I love this analogy. It actually works. For example - if I knew that the cause of my headaches was an allergy to certain foods, I could avoid those foods, and not have to take aspirin. If I know how an exploit works, I can prevent it with my own tools - firewall, etc. and not have to worry too much about the dubious patches.
Likewise, if information anarchy is intended to spur users into defending their systems, the worms themselves conclusively show that it fails to do this. Long before the worms were built, vendors had delivered security patches that eliminated the vulnerabilities.
Here he's not talking about e-mail "viruses", but worms - specifically, worms targeting services people did not even know were running on their systems. There was plenty of buzz about Code Red before most people had it, and the patch was applied to thousands of computers as people got worried. I'm not an advocate of having people upgrade through fear, but this still disproves his point.
Now - here's his argument that published exploits take the pressure off vendors to publish fixes:
Finally, information anarchy threatens to undo much of the progress made in recent years with regard to encouraging vendors to openly address security vulnerabilities. At the end of the day, a vendor's paramount responsibility is to its customers, not to a self-described security community. If openly addressing vulnerabilities inevitably leads to those vulnerabilities being exploited, vendors will have no choice but to find other ways to protect their customers.
Crap... I'm trying to find a problem with the logic, but I can't actually understand the argument - anyone? What other ways are there for vendors to protect their customers than to put out fixes?
Anyway, that said, I'd just like to express my condolences to the author. Did you see his title? "Manager of Microsoft Security Response Center" Poor guy is probably blamed for half the bugs in code he's never heard of. Can't blame him for venting a little. I just wouldn't have done it as publicly.
Re:Some other choice quotes : (Score:5, Interesting)
I love this analogy. It actually works.
No, actually, it doesn't.
An aspirin only relieves the symptom, not the cause. If you get a headache from hitting your head against the wall, an aspirin won't stop you from continuing to hit your head against the wall; all it will do is let you do it longer.
Perhaps he can answer this, though: without exploit code, how do we know the problem is really fixed? Twice, to my knowledge, MS has released patches that didn't fix the hole they claimed to. Publicly available exploits are a failsafe: they provide an independent means of verifying that the hole is actually closed.
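To make that concrete, a published exploit boils down to a regression test. A rough sketch in C - parse_field() here is a made-up stand-in for the vendor's patched code, not anything real:

```c
#include <string.h>

#define FIELD_MAX 16

/* Hypothetical routine standing in for the vendor's code: the
 * patched version refuses input that would overflow its buffer
 * instead of copying it blindly. */
int parse_field(char *out, const char *in) {
    if (strlen(in) >= FIELD_MAX)
        return -1;               /* patched behaviour: reject */
    strcpy(out, in);             /* safe only because of the check above */
    return 0;
}

/* The published exploit input doubles as an independent check:
 * if a "fix" didn't actually close the hole, this returns 0. */
int hole_is_closed(void) {
    char buf[FIELD_MAX];
    char poc[4 * FIELD_MAX];     /* oversized proof-of-concept input */
    memset(poc, 'A', sizeof poc - 1);
    poc[sizeof poc - 1] = '\0';
    return parse_field(buf, poc) == -1;
}
```

No exploit code, no hole_is_closed() - you just have to take the vendor's word for it.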
Re:Some other choice quotes : (Score:3, Interesting)
I think that is the single most important reason for exploit code.
I read one of the new (yes, I know, the old were much better) Tom Swift books where Tom invents some sort of magical force field and, as the acid test, has his robot assistant fire a few rounds at him. Of course, it's dangerous to fire a gun at a person, but short of examining the mechanism behind the force field in detail (akin to studying the source code, which, since it isn't open to the public, isn't open to scrutiny), there is no final way of determining that something works other than trying it.
If Microsoft is going to be a closed-source software industry, they're going to have to accept the consequences of their decisions. They have to take full responsibility for their own code. Blaming their problems on something else does not eradicate them.
A possible response (Score:3, Interesting)
If I were an MS spokesman, I might answer this by saying:
"Exploits are a proper test of the validity of a patch, but it is not necessary to publish them. They can be developed and tested in closed labs and only the results published."
To which I would have to ask: "Whose lab and how can we trust them?"
Re:Some other choice quotes : (Score:4, Informative)
No, actually, it's a direct side effect of the C standard library. Things like strcpy, strcat, and sprintf - all of these are buffer overflows waiting to happen.
For example, there's a buffer overflow (probably unintentional... unless you're a conspiracy theorist like yourself) just waiting for someone to exploit it in the Mozilla image handling code. Just imagine; a linux virus that spreads by someone sending a carefully crafted image file to your system. Everything would look fine on the surface; but that image file contains compressed code that expands in such a way that it causes a buffer overflow.
... or are you saying that the Mozilla coders intended it to be a security hole?
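For the curious, the pattern (and its mechanical fix) looks something like this - function names are mine, not from Mozilla or anywhere else:

```c
#include <stdio.h>
#include <string.h>

/* The classic footgun: strcpy() writes as many bytes as the source
 * happens to hold, with no idea how big the destination is. */
void unsafe_copy(char *dst, const char *src) {
    strcpy(dst, src);                       /* overflow waiting to happen */
}

/* Bounded replacement: snprintf() never writes more than dstsize
 * bytes and always NUL-terminates; the return value says whether
 * the input fit. */
int bounded_copy(char *dst, size_t dstsize, const char *src) {
    int n = snprintf(dst, dstsize, "%s", src);
    return n >= 0 && (size_t)n < dstsize;   /* 1 = fit, 0 = truncated */
}
```

The hard part isn't the fix; it's finding every place in a few million lines of code where the unsafe version got used.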
Simon
Re:Some other choice quotes : (Score:3, Insightful)
When all else fails, litigate (Score:3, Interesting)
Considering that this essay is from Microsoft, I think it reads clearly as a thinly veiled threat to sue anyone who points out vulnerabilities in Microsoft products (UCITA, anyone?). In Microsoft logic, if people stop publishing vulnerabilities for fear of being sued, then the problem of people exploiting known vulnerabilities goes away. This logic is akin to leaving a bank vault wide open, but turning off the lights so thieves won't see it.
In the land of real people, litigation will not solve the problem, and Microsoft needs to know this. The first security expert to get sued will be screwed, but by that time the vulnerability will have been made public, and thus be exploitable. This lawsuit will leave a bad taste in the mouths of the "self-described security community," so that the next exploit that is found will be exploited rather than published. When people start abandoning their products en masse because of constant security problems, Microsoft may realize that they shouldn't've angered the people who point out the chinks in their armor.
Re:MS FUD (Score:3, Interesting)
Code Red. Lion. Sadmind. Ramen. Nimda. In the past year, computer worms with these names have attacked computer networks around the world, causing billions of dollars of damage. They paralyzed computer networks, destroyed data, and in some cases left infected computers vulnerable to future attacks.
then further down -
All of these worms made use of security flaws in the systems they attacked, and if there hadn't been security vulnerabilities in Windows®, Linux, and Solaris®, none of them could have been written. This is a true statement, but it doesn't bring us any closer to a solution.
Basically they are attempting to put Solaris and Linux in the same boat as M$ware. It looks like the author, Scott Culp, hasn't met his quarterly quota for marketing FUD and so has thrown that *cough* article together to make up for it.
Re:MS (Score:3, Insightful)
One would argue that a decent MS admin would remember to keep the door locked.
Re:MS (Score:3, Interesting)
"If someone breaks into your house because you had a lock that could be bypassed with a special lockpick, it's not the lockmaker's fault, but the fault of whoever it was that gave you the special lockpick"
I disagree.
When I buy a lock, I expect it to be secure, and I expect that the manufacturer has tested the lock against most common circumvention methods. I would be damned pissed off if my lock were openable by using any old key blank.
Similarly, when I buy server software, I expect it to hold up against point-blank buffer overflows and backdoors/side effects so large you could drive a truck through. I mean jesus, I can get free software where the authors have spent more time making sure that stupid shit doesn't get through. Some code monkey getting paid $x/hr should at least have a monetary incentive to check over the code, shouldn't they??
Or let's take a look from a different angle. I pay money for software. If it costs me money and time when it falls down, I expect to be able to get money out of the manufacturer or at least get timely fixes or decent technical support. What am I paying them for anyway?
Re:MS (Score:3, Insightful)
I would relate M$ to the builder, and the locksmith to the security boards.
Security Through Obscurity (Score:2, Funny)
So basically... (Score:5, Funny)
Re:So basically... (Score:4, Funny)
When you point the finger of blame... (Score:2, Insightful)
I think the author/Microsoft should not forget this.
Moose
Right (Score:5, Informative)
Much better that the "black-hats" "secretly" circulate the information.
</sarcasm>
If the security experts didn't find and publish the holes, good luck getting Microsoft to make the fixes a "priority".
history (Score:5, Informative)
Re:history (Score:2, Insightful)
just my $.02
They Have a Point (Score:2, Funny)
Re:They Have a Point (Score:2, Insightful)
Re:They Have a Point (Score:5, Insightful)
By not giving out exploit scripts you allow sysadmins to become lazy. They figure, "Nah, I'll just wait until an exploit comes out before I patch it," while the underground hax0r scene is already searching out your box.
Re:They Have a Point (Score:5, Insightful)
What makes you think that not having it displayed all over the web will make it any less available to the people who want to do harm?
Black hats are going to get ahold of the exploit, even if the source code to it is not published on incidents.org or bugtraq. All that not publishing it there does is provide a false sense of security.
Publishing the details in a high-visibility location does several things:
The script kiddiez are going to get these exploits when they download them from their favourite r00t kit location. Let's not pretend that withholding the same exploits from the general public really makes things much safer.
Re:They Have a Point (Score:3, Insightful)
The problem with not publishing details of the exploit is that Microsoft and other companies will look at it and say "This doesn't look like that bad of a problem, and besides, nobody will find it easily. No sense in making a patch for it. The potential abuse of this hole is negligible."
So then we end up being at the mercy of the Black Hats to quietly spread the information among themselves.
No, keeping things secret simply won't help.
Re:They Have a Point (Score:5, Informative)
1) The published source should allow any administrator to verify whether he is vulnerable and, after patching, that he is no longer vulnerable.
2) The source code should demonstrate the exact nature of the problem for the coders who wish to fix it. They would otherwise need to write their own exploit to test their fixes.
3) The source code should apply pressure to the software maker. It is akin to being flogged in public. The whole world knows you are vulnerable, and you ought to fix it.
4) The source code of the exploit should make the exploit obvious but not damage the system.
Source code exploits will ALWAYS be published in places where some crackers can get them. The challenge is designing an updating system that allows all users to apply patches in a timely fashion. I think Debian is actually closest on this one.
Microsoft is really going to get nowhere on this one. I've read accounts of people who send exploits to Microsoft in secrecy, and then HAVE to publish the code so that Microsoft is forced to fix the problem. If it doesn't impact Microsoft's marketing, Microsoft doesn't care.
The other issue that relates to this one is being as secure as possible by default. This principle applies to all Internet use of computers. Yet Microsoft blatantly violated it in the following: Office macros, email attachments, NT/Windows 2000 Server configuration (running IIS by default), Hotmail...
I've heard this one! (Score:5, Interesting)
Linus better do some complainin'... (Score:4, Flamebait)
What's wrong with that picture? Linux *is also* a registered trademark, Microsoft. I suggest you recognize it as such.
Linus, kick some ass here.
And in similar news.. (Score:5, Funny)
Because, if the security hole didn't exist in the first place, then Microsoft wouldn't have to worry about all this bad press starting to cost them business and, more importantly, mindshare.
Re:And in similar news.. (Score:5, Funny)
Re:And in similar news.. (Score:4, Funny)
New Slogan (Score:3, Insightful)
Y'know, if they didn't have so many bugs, there wouldn't be anything to release, and therefore no 'weapons' to build... it's kinda like an army making a tank with wooden components inside, then getting pissy when the other army brings flamethrowers and napalm...
A weak point (Score:2, Funny)
Doesn't this guy realize that our systems are becoming more secure every day, now that people have to take worms, trojans, and DoS attacks seriously? Maybe he should get back to securing Microsoft products and spend less time complaining about system admins trying to share info.
Hiding security flaws... (Score:3, Interesting)
And hiding all these security flaws would have made Windows more secure? Your product is not secure; stop passing the buck.
Still leaking? (Score:4, Insightful)
Let's stop anthrax, too! (Score:5, Funny)
In other news, Master Lock wants to release a new model made out of twine and butter. They ask the community to avoid discussing the security of the lock, since they anticipate it getting deployed widely, and once the ButterLock is being used to secure mission-critical systems, it will be extremely important to keep its flaws a secret.
Re:Let's stop anthrax, too! (Score:3, Interesting)
In an adversarial environment like computer security, you can't be any good if you only understand one side of the game. Even if you are a "good guy," you must understand how to be a "bad guy" to be worth anything. It's impossible to write antivirus software or truly understand viruses without looking at the code for them. It's impossible to develop a good cryptosystem without a detailed understanding of why previous systems were bad.
Many people don't quite get how a buffer overflow works (or why they should check buffer limits in their code) until someone describes how the attack works in painstaking detail. This person will now check their buffer limits, but they also know how to write a buffer overflow attack if they are maliciously inclined - a net gain in my book.
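Here's roughly that painstaking detail, as a toy in C. It illustrates the mechanics only - it is not a working exploit, the struct layout is an assumption (checked below), and real attacks typically overwrite a return address rather than a convenient flag:

```c
#include <stddef.h>
#include <string.h>

/* A fixed-size buffer sitting immediately before a privilege flag
 * in memory. */
struct session {
    char name[8];
    int  is_admin;
};
_Static_assert(offsetof(struct session, is_admin) == 8,
               "demo assumes no padding after name[]");

/* The flawed routine: trusts the caller-supplied length.  The copy
 * lands at the start of the struct, so nothing stops it from
 * running past name[] into is_admin. */
void set_name(struct session *s, const char *data, size_t len) {
    memcpy((char *)s, data, len);   /* bug: never checked against sizeof s->name */
}
```

Feed it 8 bytes of filler plus a few extra bytes, and the extras land in is_admin. Once you've seen it spelled out, checking buffer limits stops feeling optional.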
In more general terms, the Army trains people who will never do anything except defend their position in how to attack. Law schools don't break criminal law into classes on prosecution and defense, and police study methods used by criminals. But hey, Microsoft says software is too complex for this traditional process of learning how to defend.
Well, it IS a two way street. (Score:5, Insightful)
The system relies on the reaction time of the programmers.. can they supply a patch before the crackers supply an exploit?
Those of us in the *nix world seem to do pretty good.. for all sorts of reasons you don't need to go into here. Windows? Heh.. it can take months for something to get patched up. No wonder he's mad that these 'blueprints' are being provided. It's simply an extension of the security through obscurity mode of thought.
Re:Well, it IS a two way street. (Score:4, Informative)
1. always notify the vendor first.
2. always wait 2 weeks for a patch.
3. don't release on weekends or very late at night (sorry, other side of the globe.. i'm in the US)
4. always supply an exploit, if one is possible.
And even with all this in place sysadmins still wouldn't patch the problem until they got hacked. If someone doesn't patch their system after all of these steps nothing can make them.
Scott Culp seems to think that the number of hacks will go down solely by eliminating #4, while in actuality the other three steps are the ones that get more boxes hacked. With your average buffer overflow, thousands of hackers could write an exploit within maybe two or three hours of seeing a bugtraq post. Not notifying the vendor can cause havoc for weeks before a patch is issued.
Re:Well, it IS a two way street. (Score:4, Interesting)
Really? Is that why their service packs keep breaking your machine instead of fixing it? NT4 Service Pack 2 was widely known as the "service pack of death". HP refused to support their own machines running NT4 with Service Pack 4 (while at the same time advertising "the unstoppable Windows NT"). Service Pack 6 broke Lotus and was quickly replaced by Service Pack 6a. They are also known to release patches that undo previous patches. And that's just the stuff I can think of off the top of my head.
Furthermore, Microsoft patches frequently break third party software. Is it because they don't test or is it intentional? Hmmm.....
What fscking loser (Score:3, Funny)
Zot!
whose obligation to protect? (Score:5, Insightful)
I'm not sure whether anyone, other than law-enforcement agents, is obligated to protect computer users, but if anyone is, surely the people who produce the software are more obligated to prevent or solve these problems than are those who merely report on them.
Is this, along with the U.S. government's warning to news agencies to be careful what they broadcast, a sign of a new trend?
We've seen what they propose (Score:4, Insightful)
Several times we've seen security experts say to a large company, "Hey! there's a nasty exploit here!" The large company indicates they'll fix it and ignores the problem. Only when the exploit is publicized do companies like Microsoft actually take the effort to fix the code. Releasing the information is the only way. Perhaps out of courtesy the security community could give the company with the bug a week's notice.
Re:We've seen what they propose (Score:5, Informative)
From the bugtraq FAQ [securityfocus.com] (securityfocus.com):
0.1.8 What is the proper protocol to report a security vulnerability?
A sensible protocol to follow while reporting a security vulnerability is as follows:
When is it advisable to post to the list without contacting the vendor?
Don't they already provide a grace period? (Score:5, Insightful)
I thought most security exploits that get released by the major groups are usually passed through MS first, allowing them time to provide a patch before the details of the exploit are issued. So why are they so upset? It's neither MS nor the security experts who are at fault for machines going unpatched. At least by publishing, vendors are given an incentive to stay on top of security holes, instead of simply allowing them to remain secret. I mean, none of the major exploits lately (Code Red, Nimda, etc.) have used unpublished exploits. So this shows a failing in MS's procedures for keeping admins informed and a failing in the admins for keeping on top of their networks. It's such a non-issue, I think MS just wants to preempt lawsuits or some other such silliness.
Re:Don't they already provide a grace period? (Score:3, Interesting)
It raises the question, though: if the supposed reason the source is released is that the vendor didn't respond to the threat, then why does the source to the exploit STILL get released even if the vendor DOES issue a patch?
I can see what's going to happen... (Score:4, Insightful)
God damn... when did I get so cynical? Oh yeah, after reboot #3 of NT 4.0 today. {grumble grumble grumble}
It is a good point (Score:3, Interesting)
Re:It is a good point (Score:3, Insightful)
A) Skr1pt Kiddi3z who will enter your system and possibly scrawl "I love you rhonda!" on your front page.
B) Highly professional "black hat" who will enter your system, steal your new revolutionary prototype plans and provide them for a small charge to your competitor who will get it to market six months before you.
The current system allows lots of the first kind, but helps prevent many of the second. Microsoft's proposal will reverse this. High profile attacks generally do very little "real" damage, normally just some downtime or some ugly defacements. The attacks that you don't see, or in this case, WON'T EVER SEE, are the ones that will turn your business from market leader to bankruptcy auction...
To prevent attacks, you must think like attacker. (Score:5, Interesting)
Of course, MS just wants to skirt responsibility for negligence on their part.
Bug control (Score:3, Funny)
What a great idea! Then all the malicious hackers will know how to exploit security holes, while those in charge of security won't. Wait a second...isn't that kind of like asking security guards not to carry guns, because those guns might hurt someone?
Full disclosure? (Score:5, Insightful)
Hmm, this has always seemed to be a hot discussion...I'm all for full disclosure, but is it really necessary for people to include exploit code?
One argument is that it can help people test their systems for vulnerabilities, but I think that exploit code is not strictly necessary for this. People who really need it to test systems are in a position where they should have the capability or the resources to generate a "test script" for themselves, once given an accurate description of the vulnerability.
Making exploit code freely available possibly creates more opportunity for the low-life script kiddies who often don't appreciate exactly what they are doing, or the mechanics of the exploits that they are using. Why should we make it easy for those guys?
My opinion on this element of full disclosure is still not complete though, and I am fully prepared to be convinced... :)
-- Pete.
Re:Full disclosure? (Score:5, Insightful)
It's what L0pht prided themselves on for years, after having MS dismiss their whitepapers as improbable, theoretical, impossible, etc.
Re:Full disclosure? (Score:3, Insightful)
I'm all for full disclosure, but is it really necessary for people to include exploit code?
some things are easiest to communicate with sample code. in the absence of the original source code, in which case you could say "look, this function is overrunning this buffer," it would probably be easiest to demonstrate the exact nature of a security flaw using exploit code. although even in the circumstances where you have the original source, having exploit code to look at couldn't hurt in fixing the problem.
my personal feeling on this is that exploit code should first be sent to the maintainer of the original program, with a deadline for the release of a patch. there should also be a public release describing the problem in a very generic way. after the deadline, release the exploit, even if the patch isn't out yet. this gives developers time to fix the problem without putting the exploit in the hands of script kiddies. plus, the developers are under a deadline to get it fixed. granted, it's entirely possible for the kiddies to already have code to exploit it, but why give them the tools before it's necessary?
Comment removed (Score:3, Informative)
Okay, (Score:4, Informative)
here we go:
"It's high time the security community stopped providing the blueprints for building these weapons..."
How about providing the blueprints to your code, so we can secure the systems you release broken to begin with?
I'm not anti-Microsoft (although I'm getting there, definitely getting there...); I also do Windows development, in Visual Studio. I'm near the point of stopping that altogether, though. My company is already using Linux for damn near everything (including desktops, not just hosting) anyhow.
This is more than just your average case of idiocy from MS. If I ran a pharmaceutical company, and a drug we produced killed 500 people, do you think the public would accept some excuse like this? "No, really, it's all the fault of the doctors who showed their patients how to take the pills..."
Maybe not a perfect analogy, but equally stupid. When will they learn? Probably when Joe Customer starts realizing how indecent their blame machine really is. Apache isn't perfect, Linux isn't perfect... but we admit this and work toward solutions. Average Joe won't stay completely blind forever; most people aren't stupid (my faith in humanity talking here), and you can't fool anyone indefinitely.
Damn, and I was cutting down on my smoking...
they really should stop giving actual code (Score:5, Insightful)
If security experts took the time to make exploit code an exercise for the reader, we might someday end up with skript kiddies who can even write their own hardware drivers for Linux. They might even learn to write and discover new exploits for Windows without the help of security experts.
Microsoft got it on the nose this time
Are you serious?! (Score:3)
Actually, sample code is a very good way to illustrate the severity of a bug.
A bug might be the result of absolutely brutal programming, but require a programmer to jump through hoops to exploit it. In this sense, the bug isn't so bad, and users can assess the path to patching said holes. On the other hand, a bug could be the result of complex, innocent oversight which can be exploited with 3 lines of code.
I, for one, think knowing the code to exploit the bug can give admins a good sense of addressing patch priorities.
Yeah, the security pundits will tell me 'you should be patching 10 secs after the patch comes out regardless of severity', but if you really take that route, you're living in a vacuum. The rest of the world has to worry about priorities.
And I'm of the opinion that trusting MS's stance on the 'severity' of a given bug is about as big a security hole as you can have.
(Please remember to flame me on both sides, for even cooking.)
Re:Are you serious?! (Score:5, Insightful)
Exploit code and exact details let you rig together protection with a firewall, or turning off an optional service, until you feel that a suitable patch is available.
Security Watchdogs' Obligation (Score:4, Troll)
My software providers have an obligation to provide me with secure software or none at all. I commend both Debian and Apple for responding to their occasional security problems in a timely manner.
In the olden days when watchdogs did not release sample code some software providers downplayed their flaws as theoretical problems. If the software providers had been responsive to security flaws, there would be no need for sample code.
Entirely wrong focus... (Score:3, Informative)
If holes were not posted, the public would not even know their software is insecure, and it would surely take longer for any company to patch said holes.
Finally, doesn't blame ultimately fall on the company who made the buggy software in the first place? If I come up with a mathematical formula that proves 2 + 2 = 5, and a math teacher proves that I'm incorrect, who's to blame here? Microsoft believes the math teacher is wrong, something which is obviously misguided.
One final thing: I don't see Linux/BSD/Apple execs complaining.
linux exploits? (Score:5, Insightful)
Typical response from an overworked manager. (Score:4, Insightful)
So what does he do? He posts an essay which is basically a reflection of his anxiety. However, he misses two very key points on why this information anarchy is a good thing.
* Patches for exploitable holes in popular software tend to come out quickly, because the company has to save face and perhaps protect against liability suits.
* A necessary fear is instilled in companies to put software through a security audit before it goes into production.
I hope this guy takes a vacation somewhere on the beach to reflect on his thoughts.
Valid Uses of Exploits (Score:3, Insightful)
Unfortunately, it's simply untrue that there aren't positive reasons for releasing exploits.
I can think of several: testing of machines (risky, but useful), understanding of vulnerabilities (CERT advisories are pretty much useless for this), and research.
The most important of these (IMHO) is the understanding of the vulnerabilities. In the past, we didn't even talk about vulnerabilities in the open and we have the abhorrent state of affairs we have today. Security isn't even taught in computer science and engineering curricula and when it is, it's treated as a separate set of classes. When I started working in infosec, I had no idea how the exploits worked and what the real coding vulnerabilities were. Without release of exploits, I probably still wouldn't.
Rehash of same stupid argument on BugTraq (Score:4, Informative)
The short story is that eEye's announcement had absolutely nothing to do with Code Red. The person(s) who developed Code Red figured out the exploit on their own. For more details check out Marc Maiffret's (of eEye) email to the Bugtraq list: http://www.securityfocus.com/cgi-bin/archive.pl?i
People who argue that full disclosure is harmful just fail to realize the facts of the matter: the people who write these attacks aren't all script kiddies, and they're quite capable of developing attacks on their own. And the reality is that most vendors only actually fix bugs in response to full disclosure (and even then it takes too long).
Nuff said.
Hmmm, let's see here (Score:3, Funny)
"It's high time that the security industry stopped pointing out all of the blatant security flaws in our programs", Culp writes. "Since we insist on developing OSes and highly-integrated applications tuned for usability, rather than security, we can't make as much money as we're accustomed to making, what with all of these viruses/worms targeted at our products."
Culp adds, "it's time that the security industry be held responsible for these worms and viruses, rather than the companies who make products such as ours. By pointing the finger at the amorphous 'security industry', we're better able to deflect blame for the recent rash of high-profile MS OS and web server exploits."
Just baffling (Score:3, Interesting)
I would suggest to Bill & Co. that it is published with the highest regard for how the information will be used. Just because it could be used in a negative way doesn't mean that nobody's thought about it. There's not a security guy out there who hasn't at some time weighed the pros and cons of releasing information like that.
And am I the only one who is insulted by the gratuitous use of the word "weapons", so as to implicitly equate hacking with physical terrorism and fan the flames of paranoia?
Microsoft FUD (Score:3, Insightful)
MS has had it too easy for too long regarding security issues, especially with the news media reporting Outlook vulnerabilities not as what they really are, a design flaw in Outlook, but as "e-mail viruses."
"Behind every great fortune there is a crime."
- Honoré de Balzac
"You hear a lot about Bill Gates, don't you, whose net worth in January of the year 2000 was equivalent to the combined net worth of the hundred and twenty million poorest Americans, which says something, not only about the software imitator from Redmond, Washington, it says something about millions of workers who work year after year, decade after decade, and are essentially broke."
- Ralph Nader
Re:Microsoft FUD (Score:3, Interesting)
They are a flaw in Windows itself, mainly.
This is a flaw of *nix systems as well: the use of ACLs rather than capability systems.
Read the Confused Deputy paper for more information.
YOU Are The Problem (Score:5, Funny)
NEW MICROSOFT JARGON ALERT: (Score:4, Insightful)
Expect to see this term bandied about frequently.
Isn't it ironic... (Score:3, Insightful)
IMO, a response (Score:5, Interesting)
Ok, I'm going to be snide: the author points to the exploitation tools, but one could also argue that the Windows (don't laff) "security model", closed-source apps, and IIS are the *initial* tools of exploitation. Lest I forget, integration, legislation, co-opting, and barriers to entry keep other (maybe better, maybe worse) products from hitting the market and (say it with me) from promoting competition.
It's high time the security community stopped providing blueprints for building these weapons. And it's high time computer users insisted that the security community live up to its obligation to protect them.
Why? No one believed that certain (ford/chevy?) trucks would blow up like a bomb when hit from the side...what did they do? Yep, they *Proved IT*, by staging a scenario.
And, not to pick nits or be too smarmy, but "we" are trying to protect users. The fact that PHBs and average users don't *listen* after the third, fourth, or fifth time of being hacked, wormed, virused, or trojaned via Outlook, IIS, or IE seems to be nicely sidestepped.
Uh, yes it does...by choosing the most secure of the bunch! No platform is perfect, but if you choose the one with the best track record, gee, you get...surprise, surprise...less of a chance of being exploited. Once bitten, twice shy... but, then again, see my above paragraph with users/phb's.
Ok, I'll ignore the buzzword bingo opportunity, and point out that the author does "get it" a little, that the vulnerabilities mentioned had been patched weeks/months ahead of time.
Ok, cool. Correct me if I am wrong, but I recall seeing a recent article where Microsoft said it needs to "prioritize" its patches, because, heh, it is confusing!!!
The thing to be remembered in reading this article, against its dangerous assumption, is this:
If an exploit is found and is dangerous, "the security community" *needs* working samples to tear into, to discover how to fight whatever threatens the systems in question.
I'd rather have a fully working exploit in the hands of a "white hat" than a "black hat".
Don't forget, please, that most of the worms propagated as the result of *malicious* intent and were discovered, stopped, and slowed by people with *clear/clean* intent.
That fact seems to be missing.
Moose.
If I am right, I am right... but if I am wrong, show me I am wrong.
Unbe-f**kin-lievable... (Score:3, Interesting)
The people who found the .IDA exploit (eEye Security) told MS, and waited until a patch was available before making the press release.
Not only that, but Microsoft thanked eEye in their own press release.
Not only that, but it has been proven beyond all doubt that Code Red and CRII were based on old exploit code, NOT eEye sample code.
Not only that, but the old exploit code that Code Red etc. re-hashed exploited a hole that was fixed by MS in the traditional manner, i.e. with no exploit sample code published. If the original exploit code that Code Red built on had been made public in the same way as the .IDA vulnerability was, the f**kin' thing would never have happened, because every competent IDS system out there would have caught Code Red before it even got off the ground.
The whole thing makes me sick. I can't believe that after Microsoft blitzing^W attempting to blitz the media with its "renewed security efforts", they let this slip past marketing. If this is what happened, then before they can even think about 'locking down' IIS, they need to examine their own attitude, and consider abandoning the tried-and-tested-and-FAILED 'security through obscurity' route.
This guy thinks admins are idiots (Score:5, Informative)
This is pure bullshit. It is *extremely* important to understand how these worms and viruses work in order to respond effectively to such threats.
If I, as a programmer, were writing a web application in C that could potentially be remotely exploited via a buffer overflow, such information is *absolutely fucking critical* to me, so that I can write safe code.
M$ seem to suffer from the delusion that they are the only people in the world actually writing computer programs.
This unbelievable arrogance is getting pretty tired, and I imagine that we'll be seeing some pretty big anti-M$ stances being taken by previously devout believers in the near future.
If you can't put up, M$, then for Christ's sake shut up.
Inaccurate view of exploits (Score:3, Insightful)
The Cisco 675 DSL router/modem. This device is in very widespread use in consumer home and SOHO environments. Other Ciscos in that line were affected by a particular issue that caused the router to hang completely until power cycled. Cisco was first notified about this January 10, 2000 (no typo there, 01-10-00). A very easy-to-reproduce scenario was shown to cause this. After 11 months of waiting and two notifications to Cisco, the notifier had given up on Cisco doing The Right Thing (c), and notified BugTraq about the problem in this [securityfocus.com] post, Nov 28th, 2000. Users from around the world tested and verified the issue. Want to know what happened? Nothing. Not a peep from Cisco about this, until recently. The DoS vulnerability in the Cisco was never acknowledged by Cisco, and still isn't admitted. However, a notification of a DoS vulnerability was finally published by Cisco here [securityfocus.com], 8-24-2001. Nineteen months after being notified. However, the reason for this wasn't the originally reported vulnerability of a skewed HTTP request, but simply the router's inability to handle multiple HTTP connections. Why? Code Red. The Code Red worm was banging on port 80 so hard that the routers would lock up hard and die until reset. Many thousands of DSL customers were affected by this, and IMHO, a rework of the HTTP code that should have been done over a year and a half earlier would have prevented the entire nightmare of Code Red issues for owners of the Cisco 675 (their systems are another story, however).
Checking for other 'exploit code' on the BugTraq list should show that the people who create it are responsible, usually doing no more than running a 'whoami' in the case of elevated privileges. They don't arm 'script kiddiez', they do it themselves, however the proof that a hole is exploitable is all someone needs to write their own. This is not a bad thing, this is a good thing.
It is general policy on BugTraq that companies be notified and given sufficient time to resolve issues, usually 3 months or so. If that lapses, it is the infosec engineer's responsibility to post the exploit for the world. The company won't listen to the voice of one competent person, but they will listen when their entire customer base gets proof that the company shirked its responsibility to protect its customers.
Toodles
Interests of Software Manufacturers and Consumers (Score:3, Informative)
It appears that releasing sample code exploiting flaws in computer systems places increased pressure on the manufacturer to fix the bug. This is good, but it comes at a cost that places serious risk on the consumers of the product. Once exploit code is released, the potential for damage to consumer systems increases dramatically, because the tools to do damage are then available to anybody. Both sides have valid points, so a set of guidelines for reporting such bugs that takes into account the interests of all involved parties is crucial.
As far as I am concerned, there are five levels of releasing this information which could be used to balance these interests:
1. Say nothing, and somebody else will exploit the bug.
2. Release the information to the manufacturer of the software product and hope they do something about it.
3. Release a summary of the bug, enough that the general public is aware of it.
4. Release technical information on what theories are used to exploit the flaw.
5. Release the tools necessary to exploit the flaw.
The above could be thought of as an agenda for the order in which to release word of any flaw, where one step succeeds the other, starting at #2. Step 5 should be used with extreme caution - in other words, know what you're doing before taking that step, because at that point anybody can make a toy of the tool and execute the exploit on anybody's system.
This reminds me of a patch from Novell (Score:4, Interesting)
difficult problem, but this is not the solution (Score:4, Insightful)
The security community is so large and diverse that effective controls on exploit code and detailed vulnerability information are impossible. Who would determine who gets access? Microsoft? The US Government? The only practical method is the public one.
The enemy is not Microsoft's unwillingness to produce patches for their security vulnerabilities. They have actually proven to be one of the more cooperative vendors for recognizing flaws and producing and releasing patches, at least in recent times.
The enemy is not the public release of explicit vulnerability information, which is necessary for security research.
The enemy is also not the 13-year-old that breaks into computers. Fighting a war against 13-year-olds is a dumb war.
The enemy is the fact that software vendors like Microsoft have consistently chosen to place their customers at a ridiculous amount of risk through default configurations of their software, and the fact that a 13-year-old can break into thousands of computers with little effort or skill.
Why is it that default configurations of all major OSes (note that I'm not singling out Windows here, I'm saying all OSes) come with an absurd number of default services open? If the vast majority of customers do not need a service running, then it should not be running. How many Nimda infections were from people who had no idea they were running a web server in the first place?
Why is it that the most prominent workstation and network client software ships with poor default configurations, security-wise? Do most users out there really need ActiveX or JavaScript in their email client? Not only no, but hell no.
Yes, vulnerabilities do occur in all software. I don't think that anyone out there has any expectation for Microsoft or any other vendor to achieve perfection here. However, the issue is that the default posture leaves users prone not just to known vulnerabilities, but to ones that have yet to be discovered.
All software vendors (including but not limited to Microsoft) need to better examine the features of their products to discover potential points of attack. If the majority of users have no need for a particular feature that might be dangerous at some later point in time (e.g., mobile code capabilities, network services, modules to network services like IIS index server, etc.), then they should be disabled by default. Go ahead and make an easy-to-use checkbox for turning that kind of stuff on individually, but don't have it on by default.
Microsoft has recently stated that it is beginning a new initiative to ship their products in secure configurations. I believe that they probably will succeed somewhat here, but we've been hearing similar lines of bull for so long that they have no credibility here until they actually prove it.
Microsoft and other vendors should stop whining about the messengers, and should start shipping products with default configurations and initial postures that are likely to withstand existing and future attacks. Default configurations are enemy number one, not public vulnerability research. Let's see some proactive work being done instead of only reactive work. Microsoft has plenty of problems to fix in their own development processes before they worry about fixing the "problems" they feel the security community has.
The real problem: customers unaware of security (Score:4, Insightful)
Also, it must be said that most of the damage the worms did was to the image of Microsoft. These worms showed the extent of vulnerable machines all over the world, but had there been no worms, there would be even more vulnerable machines now, with backdoors open to anyone intelligent and motivated enough to write their own exploit. All the worms that draw so much publicity to the security flaws are just the tip of the iceberg. Someone really malicious will have the ability to sneak in through a hole without a ready script, and he won't do it with a worm that creates a lot of traffic; he'll silently install a backdoor and do whatever he set out to do.
When calculating the damage a worm did, the total always includes a complete system check for data integrity, backdoors, etc. But if the hole was there and had to be patched, who is to say there wasn't someone or something other than a well-known worm that came in, installed backdoors, and corrupted data? That person will probably do far more damage, since he probably chose that computer for a reason. Much damage is already done when a system had a hole and was attackable for some time, since that means system security and integrity can no longer be guaranteed. Many worms merely make people aware of that fact.
Microsoft could do far more for the security of their products by making people aware of the importance of patches, but that probably doesn't sit well with marketing.
Why not? (Score:3, Insightful)
This is all bull (Score:5, Insightful)
Never once did they contact me or send me a CD with security patches on it. Never did they send me an email to go to a website to download a fix.
I was told, when I registered my product, that they would keep me informed. They have failed to do so.
The recent exploits of IIS were from known problems that had previous patches. Many users did not patch their system. They did not know that they had to patch their system. Despite Microsoft knowing who the users of NT IIS were, they did not attempt to contact those users and let them know that patches were available.
Not only that, until recently Microsoft made it very difficult to find security patches. Their website is large and complex, and items change location all the time. In the past five years finding patches for security fixes of NT systems has gone from extremely easy, to nearly impossible, to finally getting organized and easier again.
Why is it that, after the outbreak of Code Red, it took days before information was available from a link on Microsoft's main page? Because it is bad marketing. Instead I have to dig deeper to find that information. There isn't even a generic link for security on the main page.
When you do get to their security page, you are told that Microsoft is taking the radical step of giving Security Tool Kits away for FREE!!! Amazing. You bloody well better give it to me for free; it's your buggy code that had the problem in the first place. I'm a registered user, and I haven't received a kit yet.
Microsoft is finally starting to take some initiative with this security thing. But they shouldn't run around pointing fingers at anyone other than themselves.
Re:This is all bull (Score:4, Interesting)
Hard is not the issue (Score:3, Insightful)
When you register a Microsoft product, they thank you by sending you advertisement material. No critical upgrades or anything to that effect. AOL sends CD-ROMs to everybody in America for free, hoping a few will try out their service. Microsoft customers have PAID for their product, but Microsoft does not provide them with so much as notification of upgrades/updates.
It's a sad, sad world.
If you don't make it public: My experience... (Score:5, Insightful)
In my most recent finds, not yet made public, there are a number of gross privacy bugs in some pretty major websites (similar to the Hotmail problems, but with banking, news, and ecommerce sites). Well, besides the difficulty in even finding someone in their organization to tell about the problem, once told they usually do nothing. So the question I have is: what do I do now? Leave your banking site wide open, or make the exploit public to get something done?
My favorite quote from the essay (Score:5, Insightful)
That isn't the attitude I'd want someone providing my software to take.
Scientific Method Misunderstood (Score:4, Insightful)
Programmers use code to share their experiments because it is the simplest, best, most consistent way to do so. Asking security and programming experts not to share "blueprints" is like asking toxicologists not to share the chemical formulas for the compounds they're researching.
Mr. Culp needs to take a vacation away from the stress of his job and bone up on how to systematically approach problem solving and the sharing of information used to produce repeatable experiments/tests/exploits.
Don't you dare hack .net (Score:5, Interesting)
"First, let's state the obvious. All of these worms made use of security flaws in the systems they attacked, and if there hadn't been security vulnerabilities in Windows®, Linux, and Solaris®, none of them could have been written. This is a true statement, but it doesn't bring us any closer to a solution. While the industry can and should deliver more secure products, it's unrealistic to expect that we will ever achieve perfection. All non-trivial software contains bugs, and modern software systems are anything but trivial. Indeed, they are among the most complex things humanity has ever developed. Security vulnerabilities are here to stay."
In the above argument, Culp uses truth to validate fallacy. It's true that no code is perfect. It's false that security will improve by mandating gag orders.
More to the point, Microsoft is especially frustrated with flaws being exposed in their code. Frankly, I believe the hacks associated with Microsoft products differ fundamentally from the flaws discovered in Solaris and Linux. When a Linux exploit is discovered, hackers and maintainers consider it a design flaw. Therefore, exploits are generally fixed pretty fast on Linux -- usually within a few days. The same is true for Solaris.
Apparently however, Microsoft does not consider certain exploits to be design flaws. Sometimes, hackers simply leverage "features" (e.g. undocumented APIs) that Microsoft deliberately designed into their applications and/or systems.
Microsoft applications tend to execute arbitrary code. In other words, Microsoft deliberately empowers IIS, Exchange, Internet Explorer, Outlook and certain Office applications to execute unchecked commands fed over the Internet. Once hackers discover these (badly!) hidden APIs, it is only a matter of time before someone sends you an email which does something nasty to your computer.
Interestingly, despite these obvious security issues, Microsoft wants their programs to execute arbitrary code. Remember the Microsoft Word viruses? Remember the Excel viruses? Heck, email viruses were fiction until Exchange and Outlook...
Microsoft has had years of experience and feedback since the first MS-Word virus. Obviously, they understand the risks of allowing applications to execute arbitrary code. Nevertheless, they continue to build this ability into all their major products.
In fact, arbitrary code execution appears to be one of the core technologies behind Microsoft's
Culp states that vulnerabilities are here to stay. Most likely,
At this late stage, re-designing
What makes AIDS so deadly? (Score:4, Insightful)
Many diseases are deadly if untreated. Often the scariest ones are those that kill silently over time. This is what MS is asking for. Security holes can be an obvious pain or a silent killer. If exploits are not publicized and fixed, then the exploit remains available only to those who know the most and can potentially do the most harm. Once again, this is a plea for a solution that will benefit MS and nobody else.
"QUICK" Online Software Security Lecture course (Score:3, Informative)
If you're looking for an authority on the subject, they come no higher than Dr. Blaine Burnham, Director, Georgia Tech Information Security Center (GTISC) and previously with the National Security Agency (NSA):
"Meeting Future Security Challenges"
http://www.technetcast.com/tnc_play_stream.html
If you listen to Dr. Burnham's speech, you will understand why it is so important to keep "pushing" Microsoft on its inherent lack of security.
If you want to sleep at night, don't listen to the following speech by Avi Rubin
"Computer System Security: Is There Really a Threat"
http://technetcast.ddj.com/tnc_play_stream.html
If you listen to the above speech, then you will begin to understand Steve Gibson's apocalyptic visions.
And if you want more, the effect of broadband access
"Broadband Changes Everything"
http://www.technetcast.com/tnc_play_stream.html
Directly relating to DDoS ( Distributed Denial of Service )
"Analyzing Distributed Denial of Service Tools: The Shaft Case"
http://www.technetcast.com/tnc_play_stream.html
and "Denial of Service"
http://www.technetcast.com/tnc_play_stream.html
And if you want to get *really* technical, listen to how difficult it is to trace spoofed packets [ Warning - this is heavy tech ]
"Tracing Anonymous Packets to Their Approximate Source"
http://www.technetcast.com/tnc_play_stream.html
"I would rather have Loki uncover and exploit our inherent weaknesses now than have the Ice Giants do so at Ragnarok. - David Mohring"
EULA (Score:4, Funny)
Any SECURITY HOLE bundled with the SOFTWARE PRODUCT is the property of Microsoft and protected by copyright laws and international copyright treaties.
Feeling secure with information hiding? (Score:4, Interesting)
Perhaps you could block the request in your packet-filtering system, or at least log it, but without knowing what to look for... what do you do?
And, knowing that experienced black-hat crackers also read SecurityFocus and sites like this, they don't need anything more than this information ("there is a buffer overflow in IIS...") and then they have a target for the next couple of hours. It's a competition, you know. The best crack wins. Copying a published exploit doesn't earn a cracker much credit, but the first one to discover a "new" one gets a lot of attention...
We need to understand the psychology of what makes a crack worthwhile: a published exploit every script kiddie can duplicate, but sysadmins can also counter it fast (provided that they read the right forums, as all sysadmins should!).
But a hint of a possibility in an unpublished exploit gives the black hats something to compete for: who is the first one to make the best crack? And the poor end user doesn't even know what to look for...
Second: published exploits are easy to scan for; known but unpublished exploits will fluctuate in their signature.
E.g., a special HTTP GET request to look for in the logs... you just scan your logs for exactly the string published in the exploit (or put it in your packet filter). An unpublished exploit will result in several different cracks using the same vulnerability, but probably varying a bit in exploit methodology, making them harder to scan for.
Would you dare to use your car if the factory sent you a note that "it has a fault", but not providing any details of the fault? It could be anything...
sounds pretty much like... (Score:3, Insightful)
anyone notice the terminology (Score:4, Insightful)
The phrase "information anarchy" has no coherent meaning other than that defined through MS's statement, and even there it seems to mean "any public publication of security weaknesses in MS products". Yet MS pushes the phrase over and over again in the attempt to link security reports with the word "anarchy" in the hopes that the average idiot will associate publication of flaws in MS software with irresponsible, undemocratic behavior.
Most of us geeks catch this sort of thing right off (e.g., "viral software") but notice - this one slipped under the wire with nary a comment that I could see.
One of MS's greatest weapons is the introduction of language which precludes one mindset and reinforces another: social programming at its finest. Accepting the phrase "information anarchy" as valid substantiates the idea that such a thing actually exists, even if you argue that the security reports don't constitute an example of this nebulous "information anarchy".
There's no such animal. It's a buzzword with zero meaning other than a poor attempt to lay the blame for MS security holes on people other than those employed at MS.
Perhaps we should retaliate with terminology of our own that's intimately associated with a Microsoft argument or product. Any ideas (other than the "Microsoft worms" phrase of some days back)?
Max
Nimda didn't need HELP in taking down networks (Score:3, Insightful)
Culp is assuming that the only people smart enough to decipher the viruses are the security people themselves, and THAT is the false assumption that invalidates the theory behind the 'essay'...
Two words (Score:3, Insightful)
Now burn, you troll
Re:RTFA (Score:5, Informative)
Speaking as an IIS admin, I get really pissed when I can't find sample code for an exploit. I need to be able to test my systems against a newly published exploit. If I don't have a way to do this, all I can do is apply the hotfix and hope it works. What if I want to set up some stateful inspection on my firewall just in case; how do I test that? Without sample code I have no way to really know if I am vulnerable or not. IMHO, not testing these things would be a pretty irresponsible approach to managing a datacenter.
Re:RTFA (Score:5, Informative)
Except that that was tried. What happened was that the vendors responded with "We can't reproduce that, you must be mistaken, there's no hole in our product.". After a while, the security community came to the conclusion that the only way to get vendors to wake up and actually fix their products was to release enough details that, if there was any question whether the hole existed, the skeptic could recreate the exploit and try it and see for himself. Which leaves the vendor with no way to spin the story, which is what Microsoft's really pissed off about.
Re:Ya, see.. we do.. (Score:5, Insightful)
It's very useful. For example, you can scan your network for machines running given servers, then launch exploits against all those that are running, as a double check to find unpatched servers. Since MS installs servers by default on damn near everything*, without advising the installer, this is the ONLY way to be sure you're not running unpatched servers. My organization found numerous vulnerable machines this way, even though we thought we had this nailed down.
*(Examples: Visio 2000 installs MSDE, a form of SQL Server, vulnerable. CiscoWorks 4.2 (getting old now) installs IIS, vulnerable.)
Microsoft executives on drugs (Score:3, Interesting)
Is something done? no, no funds to shore up security, no funds or resources to fix the problem or be proactive.
It's not Microsoft's fault; it's the fault of the operators and owners that will not allow their techs to do their job, or give them the tools to do their job... because it's too expensive...
Re:Microsoft executives on drugs MOD UP (Score:3, Interesting)
Meanwhile, in Redmond, someone keeps parroting "We give people what they want." Apparently a lot of us want to be pissed off. If you're in the sysadmin thing, sorry, you have my pity. If you're a worker bee, then don't get your shorts in a knot, make your opinion known once and then kick back and do whatever you have to. Can't deal with it? Get another job. Life's too short to spend being in a bent mood because of some PHB's decision to believe the Redmond propaganda machine.
As for blaming the messenger, whoa, that's only because the messenger has had so much work lately!