Microsoft Blames the Messengers
Roger writes: "In an essay published on microsoft.com, Scott Culp, Manager of the Microsoft Security Response Center, calls on security experts to "end information anarchy" and stop releasing sample code that exploits security holes in Windows and other operating systems. "It's high time the security community stopped providing the blueprints for building these weapons," Culp writes in the essay. "And it's high time that computer users insisted that the security community live up to its obligation to protect them." See the story on Cnet News.com."
I've heard this one! (Score:5, Interesting)
Hiding security flaws... (Score:3, Interesting)
And hiding all these security flaws would have made Windows more secure? Your product is not secure; stop passing the buck.
It is a good point (Score:3, Interesting)
To prevent attacks, you must think like an attacker. (Score:5, Interesting)
Of course, MS just wants to skirt responsibility for negligence on their part.
Re:whose obligation to protect? (Score:2, Interesting)
The government, on the other hand, is letting broadcasters know that
not inform of problems??? (Score:2, Interesting)
I buy a new car. It looks pretty, seems to run good on the lot. Now, the guy across the road sold the dealer the car and he knows that the tires are retreads, the engine has sawdust in it and the doorlocks will open if you kick the door....
Why shouldn't he be able to tell me these things??
I think that Microsoft should be responsible for their code. Period.
If I can write code that doesn't break, I would think that the dozens of programmers they have hired could do the same. Why isn't there a lemon law for software?
Just my pair of odors.
Memo to Microsoft (Score:1, Interesting)
Microsoft, you still don't get it...
I'm a computer user and I do not think for one moment that it is the obligation of the security community to protect me. I do not pay them to protect me. I paid you for buggy, insecure software. These security holes are your responsibility.
they are going to be crying more soon enough (Score:2, Interesting)
Not sure about the home version, but the pro version has remote administration features all over the place that are turned on automatically with your install.
I see no good coming of this.
(They have one thing called "remote desktop" which is basically like pcAnywhere, presumably so that you can call customer support and say "I don't know how to do XYZ" and they can then take over your desktop and get it all worked out for you... and hackers will NEVER figure out how to use that!)
They also take over compressed files now (zip and such) and deal with them in their own way, which isn't the way I want... annoying.
There are parts of it that are nicer, but for the most part, it just screams "I'm a security hole waiting to happen - hate on me!!!"
Re:Well, it IS a two way street. (Score:2, Interesting)
Just baffling (Score:3, Interesting)
I would suggest to Bill & Co. that it is published with the highest regard for how the information will be used. Just because it could be used in a negative way doesn't mean that nobody's thought about it. There's not a security guy out there who hasn't at some time weighed the pros and cons of releasing information like that.
And am I the only one who is insulted by the gratuitous use of the word "weapons", so as to implicitly equate hacking with physical terrorism and fan the flames of paranoia?
This makes sense.. (Score:2, Interesting)
If you have no intentions of ever fixing any problems discovered with your systems, then of course, you'd want to keep word of problems secret.
Oh, poor Microsoft, the costs of producing and distributing patches must be just a terrible burden. Imagine the burden on the rest of us who have to deal with your buggy systems. I would characterize IIS as a public menace right now.
No, this is just a bad attempt to deny reality: Microsoft's poor practices are coming to light in a way even the average Joe can understand.
Re:Linus better do some complainin'... (Score:2, Interesting)
What's also wrong with this? Um, can anybody remember the name of the worm that recently attacked Linux and Solaris? Darn, I forgot the name; it must not have had such a great impact...
IMO, a response (Score:5, Interesting)
Ok, I'm going to be snide: the author points to the exploitation tools, but one could also argue that the Windows (don't laff) "security model", closed-source apps, and IIS are the *initial* tools of exploitation. Lest I forget: integration, legislation, co-opting, and barriers to entry keep other (maybe better, maybe worse) products from hitting the market and (say it with me) promoting competition.
It's high time the security community stopped providing blueprints for building these weapons. And it's high time computer users insisted that the security community live up to its obligation to protect them.
Why? No one believed that certain (ford/chevy?) trucks would blow up like a bomb when hit from the side...what did they do? Yep, they *Proved IT*, by staging a scenario.
And, not to pick nits or be too smarmy, but "we" are trying to protect users. The fact that PHBs and average users don't *listen* after the third, fourth, fifth time of being hacked, wormed, virused, or trojaned via Outlook, IIS, or IE seems to be nicely sidestepped.
Uh, yes it does...by choosing the most secure of the bunch! No platform is perfect, but if you choose the one with the best track record, gee, you get...surprise, surprise...less of a chance of being exploited. Once bitten, twice shy... but, then again, see my above paragraph with users/phb's.
Ok, I'll ignore the buzzword bingo opportunity, and point out that the author does "get it" a little, that the vulnerabilities mentioned had been patched weeks/months ahead of time.
Ok, cool. Correct me if I am wrong, but I recall seeing a recent article where Microsoft said it needs to "prioritize" its patches, because, heh, it is confusing!!!
The thing to be remembered in reading this article is the dangerous assumption it makes, which is this:
If an exploit is found and is dangerous, "the security community" *needs* it to tear into and discover how to fight whatever threatens the systems in question.
I'd rather have a fully working exploit in the hands of a "white hat" than a "black hat".
Don't forget, please, that most of the worms propagated as the result of *malicious* intent and were discovered, stopped, and slowed by people with *clear/clean* intent.
That fact seems to be missing.
Moose.
If I am right, I am right... but if I am wrong, show me I am wrong.
Re:Linus better do some complainin'... (Score:2, Interesting)
Unbe-f**kin-lievable... (Score:3, Interesting)
The people who found the .IDA exploit (eEye security) told MS, and waited until a patch was available before making the press release.
Not only that, but Microsoft thanked eEye in their own press release.
Not only that, but it has been proven beyond all doubt that Code Red and CRII were based on old exploit code, NOT eEye sample code.
Not only that but the old exploit code that Code Red etc. re-hashed, exploited a hole that was fixed by MS in the traditional manner, i.e. with no exploit sample code published, etc. If the original exploit code that Code Red built on was made public in the same way as the .IDA vulnerability was, the f**kin' thing would never have happened, because every competent IDS system out there would have caught Code Red before it even got off the ground.
The whole thing makes me sick. I can't believe that after Microsoft blitzing^W attempting to blitz the media with its "renewed security efforts" they let this slip past marketing. If this is what happened, then before they can even think about 'locking down' IIS, they need to examine their own attitude, and consider abandoning the tried-and-tested-and-FAILED 'security through obscurity' route.
Re:Don't they already provide a grace period? (Score:3, Interesting)
It raises the question, though: if the supposed reason the source is released is that the vendor didn't respond to the threat, then why does the source to the exploit STILL get released even if the vendor DOES issue a patch?
Re:Some other choice quotes : (Score:5, Interesting)
I love this analogy. It actually works.
No, actually it doesn't.
An aspirin only relieves the symptom, not the cause. If you get a headache from hitting your head against the wall, an aspirin won't stop you from continuing to hit your head against the wall; all it will do is let you do it longer.
Perhaps he can answer this, though: without exploit code, how do we know the problem is really fixed? Twice to my knowledge MS has released patches that didn't fix the hole they claimed. Publicly available exploits are a failsafe; they provide an independent means of verifying that the hole is actually closed.
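The verification idea above is simple enough to sketch. The parser and payload here are hypothetical stand-ins, not any real MS code: a working exploit doubles as a regression test, because you can feed the exact malicious input to the patched code and confirm it is now rejected.

```python
# Sketch: using a known exploit payload as a regression test for a patch.
# All names and inputs below are hypothetical stand-ins for illustration.

def parse_request_patched(data: bytes) -> str:
    """Stand-in for a vendor's patched parser: reject oversized requests."""
    if len(data) > 256:
        raise ValueError("request too long")
    return data.decode("ascii", errors="replace")

# The exact payload the public exploit used to trigger the overflow.
exploit_payload = b"GET /" + b"A" * 1024

def hole_is_closed() -> bool:
    try:
        parse_request_patched(exploit_payload)
        return False   # payload accepted: the patch did not fix the hole
    except ValueError:
        return True    # payload rejected: the patch holds

verified = hole_is_closed()
```

Without the payload in hand, all anyone can do is take the vendor's word that the patch works.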
Information anarchy? I wish. (Score:2, Interesting)
If we want secure software, we should write it. If we don't want to write it ourselves, we should be ready to pay for it. If we do want to write it ourselves, we can call it open source. Either way, there is a motivation to make secure programs.
It is possible to write non-trivial programs without security bugs. It is very difficult, so in the mean time we should settle for the best security we can get. The best security is pretty good if you take reasonable precautions like not choosing a password like 'ant'.
So get off your butts, MS, and make your software secure, and not through obscurity!
Re:Some other choice quotes : (Score:3, Interesting)
I think that is the single most important reason for exploit code.
I read one of the new (yes, I know, the old were much better) Tom Swift books where Tom invents some sort of magical force field and, as the acid test, has his robot assistant fire a few rounds at him. Of course, it's dangerous to fire a gun at a person, but short of examining the mechanism behind the force field (akin to studying the source code in detail, which, since it isn't open to the public, isn't open to scrutiny), there is no final way of determining that something works other than trying it.
If Microsoft is going to be a closed-source software industry, they're going to have to accept the consequences of their decisions. They have to take full responsibility for their own code. Blaming their problems on something else does not eradicate them.
Microsoft executives on drugs (Score:3, Interesting)
Is something done? No: no funds to shore up security, no funds or resources to fix the problem or be proactive.
It's not Microsoft's fault; it's the fault of the operators and owners that will not allow their techs to do their job, or give them the tools to do their job... because it's too expensive...
Umm.. (Score:2, Interesting)
Am I the only one who noticed.... (Score:2, Interesting)
Perhaps someone should send them a friendly tip on Linus' IP rights... I tried, but their comments page doesn't have a comments section to type in. =[
Raw Sockets and M$ (Score:2, Interesting)
This reminds me of a patch from Novell (Score:4, Interesting)
Re:Well, it IS a two way street. (Score:4, Interesting)
Really? Is that why their service packs keep breaking your machine instead of fixing it? NT4 Service Pack 2 was widely known as the "service pack of death". HP refused to support their own machines running NT4 with Service Pack 4 (while at the same time advertising "the unstoppable Windows NT"). Service Pack 6 broke Lotus and was quickly replaced by Service Pack 6a. They are also known to release patches that undo previous patches. And that's just the stuff I can think of off the top of my head.
Furthermore, Microsoft patches frequently break third party software. Is it because they don't test or is it intentional? Hmmm.....
Re:MS FUD (Score:3, Interesting)
Code Red. Lion. Sadmind. Ramen. Nimda. In the past year, computer worms with these names have attacked computer networks around the world, causing billions of dollars of damage. They paralyzed computer networks, destroyed data, and in some cases left infected computers vulnerable to future attacks.
then further down -
All of these worms made use of security flaws in the systems they attacked, and if there hadn't been security vulnerabilities in Windows®, Linux, and Solaris®, none of them could have been written. This is a true statement, but it doesn't bring us any closer to a solution.
Basically they are attempting to put Solaris and Linux in the same boat as M$ware. It looks like the author, Scott Culp, hasn't met his quarterly quota for marketing FUD and so has thrown that *cough* article together to make up for it.
Re:Microsoft executives on drugs (Score:2, Interesting)
How can it not be Microsoft's fault for releasing what you call "the most insecure" products? Perhaps Microsoft's response could be, "Well, heck, you knew our products are insecure; it's your own fault for using them!"
Re:Microsoft FUD (Score:3, Interesting)
They are a flaw in Windows itself, mainly.
This flaw is a flaw of *nix systems as well, and the flaw is using ACLs rather than capability systems.
Read the Confused Deputy paper for more information.
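For readers who haven't seen the paper, the distinction can be sketched in a toy example (all names hypothetical; Python used only for illustration). In ACL style, a "deputy" acts on a *name* the caller supplies, using the deputy's own ambient authority, so a caller can designate a resource it has no right to touch. In capability style, the caller must hand over the resource itself, so designation and authority travel together.

```python
# Toy sketch of the Confused Deputy problem (hypothetical names).

billing_log = []                           # only the deputy may write here
files_by_name = {"billing.log": billing_log}

def deputy_acl(path: str, entry: str) -> None:
    # ACL style: the deputy resolves the name with its OWN authority,
    # so it can be tricked into writing wherever the caller points it.
    files_by_name[path].append(entry)

# A hostile caller names a file it could never open itself, and wins:
deputy_acl("billing.log", "forged entry")

def deputy_capability(log: list, entry: str) -> None:
    # Capability style: the caller must already hold the object itself.
    # It cannot designate a resource it lacks the authority to pass in.
    log.append(entry)
```

Under the capability discipline, a caller without the `billing_log` object simply has nothing to hand the deputy, which is the paper's point.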
Re:Microsoft executives on drugs MOD UP (Score:3, Interesting)
Meanwhile, in Redmond, someone keeps parroting "We give people what they want." Apparently a lot of us want to be pissed off. If you're in the sysadmin thing, sorry, you have my pity. If you're a worker bee, then don't get your shorts in a knot, make your opinion known once and then kick back and do whatever you have to. Can't deal with it? Get another job. Life's too short to spend being in a bent mood because of some PHB's decision to believe the Redmond propaganda machine.
As for blaming the messenger, whoa, that's only because the messenger has had so much work lately!
Re:MS (Score:3, Interesting)
"If someone breaks into your house because you had a lock that could be bypassed with a special lockpick, it's not the lockmaker's fault, but the fault of whoever it was that gave you the special lockpick"
I disagree.
When I buy a lock, I expect it to be secure, and I expect that the manufacturer has tested the lock against most common circumvention methods. I would be damned pissed off if my lock were openable by using any old key blank.
Similarly, when I buy server software, I expect it to hold up against point-blank buffer overflows and backdoors/side effects so large you could drive a truck through. I mean jesus, I can get free software where the authors have spent more time making sure that stupid shit doesn't get through. Some code monkey getting paid $x/hr should at least have a monetary incentive to check over the code, shouldn't they??
Or let's take a look from a different angle. I pay money for software. If it costs me money and time when it falls down, I expect to be able to get money out of the manufacturer or at least get timely fixes or decent technical support. What am I paying them for anyway?
Re:Let's stop anthrax, too! (Score:3, Interesting)
In an adversarial environment like computer security, you can't be any good if you only understand one side of the game. Even if you are a "good guy" you must understand how to be a "bad guy" to be worth anything. It's impossible to write antivirus software or truly understand viruses without looking at the code for them. It's impossible to develop a good cryptosystem if you don't have a detailed understanding of why previous systems are bad.
Many people don't quite get how a buffer overflow works (or why they should check buffer limits in their code) until someone describes how the attack works in painstaking detail. This person will now check their buffer limits, but they also know how to write a buffer overflow attack if they are maliciously inclined - a net gain in my book.
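For anyone who hasn't had it spelled out, here is a toy sketch of the bug class. Python's own bounds checking stands in for the silent memory corruption a C program would suffer, and the payload is made up:

```python
# Toy sketch of a buffer overflow: copying attacker-supplied data into a
# fixed-size buffer without checking its length. In C the unchecked copy
# would silently overwrite adjacent memory; Python raises IndexError
# instead, which stands in for the corruption here.

BUF_SIZE = 16

def unsafe_copy(dest: bytearray, src: bytes) -> None:
    # No length check: writes until the input runs out.
    for i, b in enumerate(src):
        dest[i] = b            # i == 16 walks off the end of dest

def safe_copy(dest: bytearray, src: bytes) -> None:
    # Bounds-checked: truncate the input to the buffer's capacity.
    for i, b in enumerate(src[:len(dest)]):
        dest[i] = b

buf = bytearray(BUF_SIZE)
payload = b"A" * 64            # oversized "attack" input

try:
    unsafe_copy(buf, payload)
    result = "overflowed silently"
except IndexError:
    result = "write past end of buffer caught"

safe_copy(buf, payload)        # copies only the first 16 bytes
```

Once you have walked through something like this, checking buffer limits stops being an abstract rule and becomes obvious hygiene.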
In more general terms, the Army trains people who will never do anything except defend their position in how to attack. Law schools don't break criminal law into classes on prosecution and defense, and police study methods used by criminals. But hey, Microsoft says software is too complex for this traditional process of learning how to defend.
Don't you dare hack .net (Score:5, Interesting)
"First, let's state the obvious. All of these worms made use of security flaws in the systems they attacked, and if there hadn't been security vulnerabilities in Windows®, Linux, and Solaris®, none of them could have been written. This is a true statement, but it doesn't bring us any closer to a solution. While the industry can and should deliver more secure products, it's unrealistic to expect that we will ever achieve perfection. All non-trivial software contains bugs, and modern software systems are anything but trivial. Indeed, they are among the most complex things humanity has ever developed. Security vulnerabilities are here to stay."
In the above argument, Culp uses truth to validate fallacy. It's true that no code is perfect. It's false that security will improve by mandating gag orders.
More to the point, Microsoft is especially frustrated with flaws being exposed in their code. Frankly, I believe the hacks associated with Microsoft products differ fundamentally from the flaws discovered in Solaris and Linux. When a Linux exploit is discovered, hackers and maintainers consider it a design flaw. Therefore, exploits are generally fixed pretty fast on Linux -- usually within a few days. The same is true for Solaris.
Apparently however, Microsoft does not consider certain exploits to be design flaws. Sometimes, hackers simply leverage "features" (e.g. undocumented APIs) that Microsoft deliberately designed into their applications and/or systems.
Microsoft applications tend to execute arbitrary code. In other words, Microsoft deliberately empowers IIS, Exchange, Internet Explorer, Outlook and certain Office applications to execute unchecked commands fed over the Internet. Once hackers discover these (badly!) hidden APIs, it is only a matter of time before someone sends you an email which does something nasty to your computer.
Interestingly, despite these obvious security issues, Microsoft wants their programs to execute arbitrary code. Remember the Microsoft Word viruses? Remember the Excel viruses? Heck, email viruses were fiction until Exchange and Outlook...
Microsoft has had years of experience and feedback since the first MS-Word virus. Obviously, they understand the risks of allowing applications to execute arbitrary code. Nevertheless, they continue to build this ability into all their major products.
In fact, arbitrary code execution appears to be one of the core technologies behind Microsoft's
Culp states that vulnerabilities are here to stay. Most likely,
At this late stage, re-designing
When all else fails, litigate (Score:3, Interesting)
Considering that this essay is from Microsoft, I think it reads clearly as a thinly veiled threat to sue anyone who points out vulnerabilities in Microsoft products (UCITA, anyone?). In Microsoft logic, if people stop publishing vulnerabilities for fear of being sued, then the problem of people exploiting known vulnerabilities goes away. This logic is akin to leaving a bank vault wide open, but turning off the lights so thieves won't see it.
In the land of real people, litigation will not solve the problem, and Microsoft needs to know this. The first security expert to get sued will be screwed, but by that time the vulnerability will have been made public, and thus be exploitable. This lawsuit will leave a bad taste in the mouths of the "self-described security community," so that the next exploit that is found will be exploited rather than published. When people start abandoning their products en masse because of constant security problems, Microsoft may realize that they shouldn't've angered the people who point out the chinks in their armor.
Dumb threats from big companies forced hands (Score:1, Interesting)
Fight the instigator, not the messenger (Score:2, Interesting)
MS should be flogging their inept staff for putting so many critical ones in, then flogging their QA for not finding the serious ones. Yes, they have some very complicated products, but there's such a thing as unit testing, and dammit, they haven't done any (or enough).
Re:This is all bull (Score:4, Interesting)
who's job? (Score:1, Interesting)
Good lord, this should be the job of those who create, promote, and most of all charge for this cr*ppy OS.
Coolness factor (Score:2, Interesting)
Like someone posted in some other discussion here a few days ago, making exploits public probably reduces the need for potential wannabes or semi-blackhats to compete in the field. What's cool about doing what 10,000 other similar people can do, when everything is written already? All you need is gcc -o nukem2 nukem.c.
Suppressing exploits, or further, even all security hole announcements, could raise hell, engaging all competent-enough wannabes in writing exploits to compete with each other. Once again there would be a social gain in producing the best exploit in the shortest time.
Yet there are still enough script kiddyzzz to cause harm if companies don't deliver patches and admins don't install them; and that pressure is exactly what gets things fixed. Would Microsoft ever raise an eyebrow at any security hole if there were no public means to exploit it? Then only outlawed blackhats would overflow buffers, and assuming they were pros, no one would probably notice anything until one morning something completely different had happened during the night...
Feeling secure with information hiding? (Score:4, Interesting)
Perhaps you could block the request in your packet-filtering system, or at least log it, but without knowing what to look for... what do you do?
And, knowing that experienced black-hat crackers also read securityfocus and sites like this, they don't need anything more than this information ("there is a buffer overflow in IIS...") and then they have a target for what to do for the next couple of hours. It's a competition, you know. The best crack wins. Copying an exploit doesn't give a cracker much credit, but the first one to discover a "new" one gets a lot of attention...
We need to understand the psychology of what makes a crack worthwhile: a published exploit is one every script kiddie can duplicate, but it is also one the sysadmins can counter fast (provided that they read the right forums, as all sysadmins should!)
But a hint of a possibility in an unpublished exploit gives the black-hats something to compete for: who is the first to make the best crack? And the poor end-user doesn't even know what to look for...
Second: published exploits are easy to scan for; known but unpublished exploits will fluctuate in their signature.
E.g. a special HTTP GET request to look for in the logs... you just scan your logs for exactly the string published in the exploit (or put it in your packet filter). An unpublished exploit will instead produce several different cracks using the same vulnerability, probably varying a bit in exploit methodology, making it harder to scan for.
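The point is concrete enough to sketch. The signature string and log lines below are illustrative stand-ins (loosely styled after a Code-Red-type request), not a real exploit: a published exploit hands every admin an exact string to grep for.

```python
# Minimal sketch: scanning an access log for a known, published exploit
# signature. The marker string and log lines below are made up for
# illustration; in practice you'd use the exact string from the exploit.

SIGNATURE = "/default.ida?"   # hypothetical Code-Red-style request marker

def scan_log(lines):
    """Return the log lines that contain the known exploit signature."""
    return [line for line in lines if SIGNATURE in line]

access_log = [
    '10.0.0.1 - - "GET /index.html HTTP/1.0" 200',
    '10.0.0.2 - - "GET /default.ida?NNNNNNNN HTTP/1.0" 404',
]
hits = scan_log(access_log)   # flags only the suspicious request
```

With an unpublished exploit there is no single string to match, which is exactly the asymmetry the comment describes.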
Would you dare to use your car if the factory sent you a note that "it has a fault", but not providing any details of the fault? It could be anything...
A possible response (Score:3, Interesting)
If I were an MS spokesman, I might answer this by saying:
"Exploits are a proper test of the validity of a patch, but it is not necessary to publish them. They can be developed and tested in closed labs and only the results published."
To which I would have to ask: "Whose lab and how can we trust them?"
Re:Full disclosure? (Score:2, Interesting)
I have *never* approached security the same way since then. I have *always* taken every vulnerability seriously after that. Before then, hacking was what happened to the other guy, or was difficult; but when I saw it, it changed me. To me, that's why it's important.