Schneier On Full Disclosure 232
Bruce let me know that he's written a piece on ZDNet (the original home of the Window of Exposure idea is Counterpane) about the problems of not following full disclosure. It's very well written and does a great job of summarizing why full disclosure works. The original piece from Culp @ Microsoft is also available, along with the PowerPoint presentation that they did.
Remember! (Score:3, Funny)
Re:Remember! (Score:2, Funny)
MS has made a big mistake (Score:2, Insightful)
You're right that it's a marketing decision (Score:2, Interesting)
Re:MS has made a big mistake (Score:2)
I know this, you know this, but the marketing team of Microsoft (or any other software, hardware, car, screw, whatever vendor) doesn't. Admitting a vulnerability is admitting a flaw.
"If a product is on sale, it has no flaws" is what marketeers repeat to themselves like a mantra; it doesn't matter whether said product even works at all.
Re:MS has made a big mistake (Score:2)
I am for full disclosure but... (Score:2, Insightful)
Re:I am for full disclosure but... (Score:5, Interesting)
So just hiding information doesn't necessarily make you more secure.
sPh
[1] OK, the Soviet Union had spies inside the project before it started, but that doesn't count!
Re:I am for full disclosure but... (Score:1)
How does that not count? In fact, how does that not discredit the notion that the lack of information clued the Soviets to the existence of a cover-up?
More to the point, who is going to assume that their software is insecure based on the lack of security updates? I'm not sure that Cold-War paranoia translates to the consumer software market so readily.
Re:I am for full disclosure but... (Score:3, Interesting)
Copper was being used elsewhere in the war effort, so:
Swiped from http://members.aol.com/fmcguff/dwmodel/intro.htm [aol.com]
Re:I am for full disclosure but... (Score:3, Funny)
Of course, this was before somebody suggested using uranium and plutonium. They gave the silver back because it wouldn't blow up. Uranium makes really lousy money, on the other hand. It has a good weight, and it's a bit warm to the touch, giving it a nice feel in your hands. But it tended to cause tumors on the upper thigh, right where trousers' pockets are. So for the Treasury and the War Department, it was what you'd call a "win-win situation".
Re:I am for full disclosure but... (Score:2)
The reports I've heard that silver basically disappeared from the market make me think that they may well have purchased a large quantity and driven prices up before they attempted to borrow it from the Treasury, which could also account for a high price.
Re:I am for full disclosure but... (Score:2, Informative)
You're kidding, right? Anyone who's read Feynman's book [fatbrain.com] on the subject would know that the security was a joke. Fences with holes in them, inattentive guards, insecure safes, and poor whistleblowing policies were all part of the Manhattan Project's "security". Secondly, the security was handled by the military, not the FBI.
Neat trick, since the Manhattan Project started in 1942. The absence of public information did tip off Kurchatov, but keeping your people from publishing in journals isn't hard. It's keeping spies from passing secrets to a foreign agent outside a diner 50 miles from the secure facility that presents a problem.
David Greenglass, the mole who provided many of the secrets the Russians obtained from the Manhattan Project (and who served as a prosecution witness against the Rosenbergs), wasn't assigned to the project until 1944. There were of course other spies, and infiltrating before a project starts most definitely does count, but I felt like going after the factual error.
Re:I am for full disclosure but... (Score:2)
However, you do seem to be forgetting the Tube Alloys project and Klaus Fuchs, who was involved from 1938 and was one of the first from the British team to transfer information to the US. And to the Soviet Union as well, although that wasn't known at the time.
Similarly, research on military applications of fission, and attempts to suppress knowledge of that research, occurred before the Manhattan Project was officially started (which actually happened pretty late in the game).
sPh
Re:I am for full disclosure but... (Score:2, Insightful)
1. This is the vulnerability of our Nuclear Piles
2. This is where you can cross the border undetected
3. This is how to make a Fake ID
Should be told to the people who are responsible for the security and administration of nuclear piles, border crossings, and fake IDs. In the computer world, that audience is nearly everyone (i.e., some large part of the computer-using world uses Windows, so full disclosure is good in that situation).
Re:I am for full disclosure but... (Score:2, Insightful)
"This is the vulnerability of our Nuclear Piles"?
If there is a nuclear pile on the desktop of every home, then yes.
"This is where you can cross the border undetected",
If there is a border on the desktop of every home, then yes.
"This is how to make a Fake ID?"
If photo ID's are checked to allow access to the desktop of every home, then yes.
Hope this answers your question.
Re:I am for full disclosure but... (Score:5, Informative)
This is where you can cross the border undetected [house.gov]
This is how to make a Fake ID? [counterfeitlibrary.com]
Well maybe I didn't say every single tiny little syllable but basically I said em, basicly.
Re:I am for full disclosure but... (Score:2)
It's not quite a perfect analogy (Score:1)
Well, actually, (Score:2)
But your analogy is seriously flawed. Governments, like all bureaucracies, strive first and foremost to avoid bad publicity and/or responsibility for their actions. That's why openness, accountability, and yes -- full disclosure -- are important. There is always a gray area in terms of giving the relevant corporation/agency advance notice, and some limited exceptions for national security.
But you need not worry about the balance tilting too far. The CIA might publish a guidebook on torture, but it wouldn't publish a guide on getting a fake ID/passport. Hence it's so rare for teenagers or illegal aliens to get any fake documents at all.
Re:I am for full disclosure but... (Score:3, Interesting)
The disclosure should be done by people who identify the vulnerablities. If you know where you can cross a border undetected, you ought to let someone know. Particularly in that case, the hole would probably get closed pretty quickly. And if some random person notices a hole, it would be pretty easy for someone actually looking for a vulnerability to find it.
For example, if in August (or before) someone had said to the general public something like, "You can probably hijack an airplane with legal objects and then destroy a building with it", the passengers wouldn't have let the hijacking get anywhere, and the hijackers probably wouldn't have tried. There's obviously the risk that some groups that wouldn't have thought of it would get the idea, but it would have gotten fixed in policy before anyone could do anything to exploit it.
Re:I am for full disclosure but... (Score:2)
Good luck getting the desired results. Even after the fact people are still complaining about how the increase in airport security is mostly cosmetic. Not to mention the fact that if you are overheard even mentioning the word "bomb" in an airport, you are likely to be detained for a while. (This was true even before recent events...)
The point is, people are always coming up with ideas, but the policy makers, and the people in charge simply don't have the desire, resources, or whatever to act on very many of them. How does suggesting a possible vulnerability in airport security motivate the responsible person or persons to actually implement a change?
Re:I am for full disclosure but... (Score:2, Interesting)
would you extend these arguments to support it in non-virtual security?
Yup.
Should the CIA and other international organizations use full exposure? Should they publish something titled, "This is the vulnerability of our Nuclear Piles"? "This is where you can cross the border undetected", "This is how to make a Fake ID?"
That's not quite the same. I no more expect the CIA to use full disclosure than Microsoft. Full disclosure is about third parties pointing out problems.
A better analogy would be, "Should anyone who wants to be able to publish things like the Guide to Lock Picking [lysator.liu.se]?" Sure enough, you can find works on picking locks, defeating car and home alarms, hotwiring cars, making fake IDs, and a host of other real-world security issues. And these works are good things. Individuals affected by these risks can use this information to make their own judgements on how to protect themselves.
Re:I am for full disclosure but... (Score:3, Insightful)
You will note, if you read the article, that this is probably the only case where "bug secrecy" is necessary: it is extremely bad to publish a bug for non-fixable systems (like air traffic control computers). It is good in one sense that the exploit is known (so that they avoid it the next time), but it is bad to let it loose if the system is still deployed, cannot be changed, and isn't going away soon.
So to continue the analogy, it isn't good to disclose vulnerabilities of nuclear stockpiles because you can't fix them.
Re:I am for full disclosure but... (Score:1)
Telling "how to make a Fake ID" is very hard to distinguish from information that does get passed out about what the current best crop of fake IDs and counterfeit currency is.
Re:I am for full disclosure but... (Score:2)
Depends on the circumstances. See below.
Should the CIA and other international organizations use full exposure? Should they publish something titled, "This is the vulnerability of our Nuclear Piles"? "This is where you can cross the border undetected", "This is how to make a Fake ID?"
To the general public? That would serve no beneficial purpose whatsoever. To qualified people or professionals who may be able to help with the problem at hand and/or counter the exploit? Youbetcherass they should. If they refuse to fix any of the problems discovered in a reasonable amount of time, the whistle should be blown on them in full. The problem comes with the "qualified professional" part. IMHO, Culp does have a point (and Schneier seems to agree with me) that dangerous tools need to be kept out of hands that can and will do damage as much as possible. Would you just give a loaded gun to an angry child? (Turnabout is fair play, dude.)
Some sort of professional org should be set up that distributes PGP keys (or some other security system) only to people who show they have the qualifications and need to access exploit and/or exploitable code. Then tools could be written that only are sent via secure, encrypted channels to those with the right keys - and hopefully kept out of the hands of script kiddies.
And before you go off singling out and bashing Microsoft yet again, remember all systems [slashdot.org] can have potentially dangerous and destructive security flaws. We need to do this as an industry, including everyone and anyone -- even those in the industry [microsoft.com] we, ummm, have a few problems with.
Soko
You are in luck (Score:3, Insightful)
Wow, what a troll. The CIA being an "international organization" is a dead giveaway. The other is the fantastic false analogy between buggy PC software and nuclear bombs. No organization currently mass-produces nuclear weapons for daily use on every desktop. No one here would recommend such things.
At the same time, some countries, like the USA, recognize that free thought is needed for scientific development and that full disclosure and broad education are in the public interest. While the particular technical details of how to build bombs are kept secret, the physical principles are trumpeted and encouraged. Indeed, public debate on principles is encouraged, as free discourse leads to knowledge. "Freedom is the freedom to say that two plus two make four. If that is granted, all else follows," wrote George Orwell's sad character in 1984. While the Department of Energy and its employees might not tell us details, they will not keep you or me from talking about it. With sufficient study at any good US university, a person can learn all they need to know about bomb design. Knowledge is not yet viewed as evil. The truth will set you free, and only the free can be sure they know the truth.
M$, Adobe, RIAA, MPAA and other private interests are going a step further than cold warriors with their "information anarchy" campaign. Such blatant censorship is un-American and against the public interest. They will be defeated in the long run, as will trolls like you.
Re:I am for full disclosure but... (Score:1)
OT: I saw an ad attached to the article say "When you're thinking Microsoft Windows XP, think AMD Athlon XP." Kinda makes me want to buy an Intel.
Re:I am for full disclosure but... (Score:1)
This reminds me of a story Richard Feynman tells in Surely You're Joking, Mr. Feynman about when he was working at Los Alamos. It was a sensitive project, and there were security flaws: insecure locks, people not locking up their research, a hole in the fence. When he pointed them out, he was generally ignored and told to get back to work. So he took to pointing them out in funny, difficult-to-ignore ways -- retrieving people's notes for them when they were at meetings, walking circles around the guards (going out the approved, gated way, coming back in through a hole in the fence) -- until something was done about them.
Point is, people responsible for security don't like being told they've made a mistake, and sometimes you've got to make sure they can't just tell you to sit down and shut up, whether in the real or virtual world.
Companies like MS want to keep security issues out of the public eye because it's cheaper and easier to sell the public on features than it is on security. Their motivation is marketability and sales. So if secure software is important, we've got to make sure security issues have lots of exposure. It's the only way to motivate them.
Re:I am for full disclosure but... (Score:1)
On the other hand, if a reporter discovers some huge security flaw. Should they be allowed to report it? An ethical reporter would notify the agency in charge before publishing. This would give the agency a head start to fix the problem. Just like most people who find security flaws contact the vendor before announcing the bug (unless it's found "in the wild" as in crackers are already using the exploit).
There are certain cases where a reporter probably should sit on the story, but most likely if a reporter can find out, so can the "bad guys". It's probably far better for the government to fess up and fix the problem (or be aware that the problem exists).
The former director of the Dept. of Transportation kept trying to get security tightened at airports before eventually resigning over the issue. No one wanted to spend the $$ to increase Airport security. In this case disclosure didn't help.
There are probably dozens of cases like this. If it's hard to get things fixed when the problem is published, think how hard it is to get them fixed if no one knows.
Re:I am for full disclosure but... (Score:2)
If the airport security company is not doing it's job, I want to fucking know about it, and I want to know exactly what they're going to do to fix that prior to me ever setting my ass down in an airplane seat again.
It's about security, which flows from trust, which flows from accountability. Nobody got fired after September 11th. I think that's a big fucking problem. Did anybody at Microsoft get fired after CodeRed? That's also a BIG fucking problem.
This might make sense, *if*... (Score:2)
... your job is to look after the Nuclear stockpile, if you are a border guard, or if you have to check passports as part of your job.
As much as you OY YAY FREE SPEECH YAY proponents would like to babble on, it doesn't matter about these other things. It isn't your damn business.
I run a computer, yes, I need computer security information. But no, I am not a border guard.
Microsoft's answer to Full Disclosure (Score:5, Funny)
Just another example of how Microsoft listens to and responds to customer requests. Have a nice day!
Sometimes you should shout "Fire" (Score:1, Redundant)
Re:Sometimes you should shout "Fire" (Score:5, Funny)
(A) Shout "FIRE!" and get crushed in the panic.
(B) Walk out quietly...who cares about anyone else?
(C) Tell your closest neighbor and hope that they're a fireman.
(D) Pour on gasoline so everyone will get out faster.
Re:Sometimes you should shout "Fire" (Score:1, Funny)
Re:Sometimes you should shout "Fire" (Score:1)
However, if you resemble a human being (much like myself) you can't help but watch the pretty flames burn...
Re:Sometimes you should shout "Fire" (Score:2)
Out of context (Score:1)
In the article, the quote serves as a reminder that there are times when free speech needs to be curtailed. He is not suggesting it as a metaphor for the entire situation.
The article is riddled with this sort of straw man fallacy.
Beware of the "Fire" argument (Score:4, Insightful)
The argument that you can't just shout "fire" in a crowded theater entered the law in Schenck v. United States [findlaw.com], 249 U.S. 47, 52 (1919). This was a Supreme Court case concerning whether the government may suppress pamphlets encouraging people to resist the draft. Although I think that case may have been correctly decided (with the distinction being expressing opposition to the draft versus encouraging people to violate the draft law), I wonder if the Court realized they were treading on, or near, thin ice when they used the "Fire" analogy.
So it is with people who use the analogy today. Whenever someone starts comparing some kind of speech to shouting "Fire" in a crowded theater, don't get carried away by the emotional appeal, but keep an eye on your rights, lest someone try to make off with them.
The best ``fire'' analogy I've seen (Score:2, Funny)
> By analogy, this isn't a call for
> people for give up freedom of speech;
> only that they stop yelling fire in
> a crowded movie house.
Another wonderful analogy!
Security professionals have been yelling "fire" in crowded movie houses for years. Most of the actual patrons fail to pay any attention, despite the fact that the seats are made of explosively flammable materials, the management allows patrons to smoke cigarettes in the theatre, and occasionally the movie is interrupted by ushers dousing patrons with fire hoses if they are noticeably ablaze. Patrons who do catch fire are not offered a refund, nor a credit for those parts of the movie that they miss, nor even so much as an apology.
--- Zygo Blaxell (zblaxell, feedme.hungrycats.org)
Grace Period (Score:5, Interesting)
Grace Period
Purpose: Give users a reasonable interval during which to protect their systems against newly reported vulnerabilities
- Begins with public notice of vulnerability, and lasts for 30 days
- Is immediately curtailed if vulnerability becomes actively exploited
Do I read this correctly? Does this mean that when an exploit is shown to exist in the wild, they immediately switch to "full disclosure" mode? That means there is now an incentive to put an exploit in the wild: it means you can publish your work, even if you leak the exploit surreptitiously.
I know I must be preaching to the choir here, but, this seems exceedingly stupid. Am I missing something?
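The quoted grace-period rules are simple enough to sketch in code. This is a minimal, hypothetical illustration of the two rules as quoted above (the 30-day constant comes from the proposal; the function name and signature are my own invention):

```python
from datetime import date, timedelta

GRACE_PERIOD_DAYS = 30  # from the quoted proposal


def disclosure_date(public_notice: date, actively_exploited: bool,
                    today: date) -> date:
    """Return the date on which full details may be published.

    The grace period begins with public notice of the vulnerability
    and lasts 30 days, but is immediately curtailed if the
    vulnerability becomes actively exploited in the wild.
    """
    if actively_exploited:
        return today  # grace period curtailed: disclose immediately
    return public_notice + timedelta(days=GRACE_PERIOD_DAYS)
```

Written this way, the perverse incentive the parent post describes is plain: flipping `actively_exploited` to `True` is the only input an attacker controls, and it moves the disclosure date all the way up to today.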
Re:Grace Period (Score:3, Insightful)
If you're a responsible researcher who discovered the exploit, your work will eventually be published upon the release of a patch.
The reason, I'd assume, that "full disclosure" mode is enacted upon seeing the exploit out in the wild is to put some fire under the ass of those responsible to get a patch out. It heightens the level of urgency. I think this makes sense, actually, since in most cases a patch will (theoretically) be released during the grace period, before the exploit is actually seen in the wild.
I was actually going to propose a grace period as a "solution" to the problem, before I realized Microsoft was pushing for a grace period. I'm not fond of the month-long period, though; I'd expect it to be more like a week and a half to two weeks. Having hackable boxes sitting open for a month when someone out there knows how to get into them is irresponsible. Giving manufacturers two weeks to get themselves together before the script kiddies come on full force, though, seems like a good idea to me.
Re:Grace Period (Score:3, Insightful)
Re:Grace Period (Score:3, Insightful)
No. It means that if there is a known exploit in the wild then it is legitimate to post information about the vulnerability that it pertains to.
Let's say for a second that I'm a network administrator (which I have been) or in a related position. Would I want to know about how someone will be able to break into my network or servers? You bet I would. What if it was possible to avoid being affected by the exploit by changing default settings or shutting down services temporarily? I think whatever inconvenience that might cause would be outweighed by keeping my network secure.
Obviously you haven't had to deal with this sort of stuff before. I'd suggest you do a quick search through the Bugtraq archives [securityfocus.com] for informed discussions on vulnerability disclosure. In the information security world it's a topic which has (almost) been flogged to death.
Re:Grace Period (Score:1)
If you do want to follow his plan (which is a good starting point, if not perfect), it's fairly clear what his intent is.
Your article points out a poor paradox called the "false start": basically, a runner charged with starting early claims that obviously the race had already started -- there was already somebody running.
Re:Grace Period (Score:3, Interesting)
How exactly do they know if the vulnerability has been exploited? A box owner may not realize they've been exploited, and even then may not know the exact exploit used. What are the chances of this information getting back to microsoft before boxes #2-#200,000 are exploited?
Second, think of the attitude this takes toward customers: they won't give full disclosure until one of their customers is compromised? Sounds like a hostage situation to me.
And, for the obligatory "if Microsoft was a car company" comparison:
Partial disclosure: "One of the 4 seatbelts in your car can fail. Don't worry, there is an 80% chance that it's not the seat you're sitting in."
Full disclosure: "Don't sit in the rear passenger seat until you get the belt replaced."
Would you like your car company to not give full disclosure for 30 days, or until someone died?
...find other ways to protect their customers... (Score:2, Insightful)
blurring out...
What Culp actually said... (Score:4, Insightful)
"Most of the security community already follows common-sense rules that ensure that security vulnerabilities are handled appropriately. When they find a security vulnerability, they inform the vendor and work with it while the patch is being developed. When the patch is complete, they publish information discussing what products are affected by the vulnerability, what the effect of the vulnerability is... and what users can do to protect their systems....
"Some security professionals go the extra mile and develop tools that assist users in diagnosing their systems and determining whether they are affected by a particular vulnerability. This too can be done responsibly...
Re:What Culp actually said... (Score:1)
The key difference between what Culp suggests and the right way is that with Culp's approach there is no real incentive for the vendor.
The responsible way to release vulnerability info is to warn the vendor first, letting them know that in a week or so the advisory will be made public. That way the vendor is forced to act. Scott Culp left out the part about the time limit.
Re:What Culp actually said... (Score:1)
I believe the real problem is getting the incentive to the admins. In the case of Code Red, the patches were available for a long time, but admins didn't pick them up.
I believe that MS is currently sufficiently motivated (though I don't know how well they'll be able to patch the dam).
The biggest problem is the chain of command. Currently the only one that really works is-
1. Exploit -> CNN -> PHB -> Bad techie
and that needs to change.
Re:What Culp actually said... (Score:2)
As far as getting admins to actually patch things, I think that's best left up to the market. If a company repeatedly suffers because their admins aren't patching their machines properly, then maybe they should get new admins.
Re:What Culp actually said... (Score:1)
The process that needs to be fixed here is getting admins/users to implement patches immediately, shrinking the "Window of Exposure". MS's fear of bad PR seems to outweigh its concerns about the security of its clients.
Re:What Culp actually said... (Score:2)
What sensible security researchers do is warn the vendor in advance, then wait a "reasonable" time for the vendor to answer. What "reasonable" is up to the researcher, and generally depends on how big the hole is, how likely it is an exploit to be already in the hands of script kiddies, etc.
If the vendor doesn't answer in a timely manner (at least a non-automated "gotcha, we're checking this out"), then it's disclosure. I'd say that here "timely" is pretty short -- a few days at most. After this stage, usually there is time for fixing the hole, or at least providing a workaround until a patch can be released. This phase can last (empirical evidence from reading BugTraq) from a few days to a few weeks. Then either the vendor prepares an announcement, or the researcher does.
This is not perfect: sometimes mails get lost, or external pressure gets the better of good judgement, or whatever else. However, this manner of acting gives everybody time to understand what's happening while keeping the "vulnerability window" as tight as possible.
What is different from Culp's statement? That the researchers, and not only the vendors, get to decide what an "appropriate response time" is, so critical knowledge doesn't get stranded in somebody's mailbox until marketing says otherwise.
About releasing proof-of-concept code responsibly: either such code works or it doesn't. Some professionals deliberately put a couple of syntax errors in their exploits, so that a completely clueless script kiddie can't just fetch them and use them. However, it only takes one clueful script kiddie to release a working version of the exploit. Unfortunately, in this particular case it's either black or white; I see no room for greys.
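The researcher-driven timeline described in this comment can be sketched as a sequence of stages. The stage names and day counts below are illustrative assumptions drawn from the comment's "a few days at most" and "a few days to a few weeks" estimates, not a fixed standard:

```python
# Hypothetical sketch of the responsible-disclosure timeline
# described above. Day offsets are illustrative, not normative.
DISCLOSURE_TIMELINE = [
    ("notify_vendor",      0),   # private warning, with a disclosure deadline
    ("vendor_ack_due",     3),   # "timely" ack: a few days at most
    ("fix_or_workaround", 21),   # patching phase: a few days to a few weeks
    ("public_advisory",   28),   # vendor or researcher announces
]


def stage_on(day: int) -> str:
    """Return the latest stage reached by the given day after first contact."""
    current = DISCLOSURE_TIMELINE[0][0]
    for stage, start in DISCLOSURE_TIMELINE:
        if day >= start:
            current = stage
    return current
```

The point of the table is that the deadlines belong to the researcher, not the vendor: if the vendor never acknowledges, the schedule marches on to `public_advisory` anyway.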
Fire (Score:2)
Slam.
Re:Fire (Score:2, Insightful)
Bruce's continuation of the analogy is to show that this simply doesn't work, because those yelling "security hole!" are doing it because there is, in fact, a security hole.
One thing microsoft is good for... (Score:2)
Computer/network/internet security issues have been around a long time; perhaps now it will be more of a factor in management decision making.
Re:One thing microsoft is good for... (Score:2)
He pegs it with this: (Score:2, Insightful)
Perhaps it was pointed out that Code Red et al. had patches available a month ahead of time.
But in the same breath/stroke, MS mentioned that their method of informing customers about patches and vulnerabilities, and of distributing them, was/is "confusing".
And the article by Culp almost says, in effect, "we don't want vulnerabilities known, so we can stop writing patches and bugfixes, or do it when we feel like it".
The whole "rely solely on the vendor" schtick is coming full circle it seems.
The author pointed out that that is the way "it used to be", and it seems Microsoft is pushing for it to be that way again.
Re:He pegs it with this: (Score:1)
Share the vulnerability with the press (Score:2)
2. Write code to exploit the vulnerability.
3. Arrange with an industry journalist to demonstrate the exploit.
Then it comes down to MS PR vs. journalistic integrity.
P.S. Don't even THINK about doing this unless you're cool with MS buying all the trade rags...
technet security slight (Score:1, Interesting)
http://www.microsoft.com/technet/treeview/defau
That innocent little list o' worms (Score:5, Insightful)
-------------
All your sig are belong to us.
Re:That innocent little list o' worms (Score:1)
1) "list of microsoft viruses" - slashdot answer "hahaha microsoft sucks"
2) "list of viruses" - slashdot answer "wtf? linux doesn't suck like that"
I'd say he went with the right choice. Plus, he does work for Microsoft, after all.
Re:That innocent little list o' worms (Score:3, Insightful)
Re:That innocent little list o' worms (Score:2)
Please do. Remember we are talking worms here and not trojans or virii. I'd be very interested in hearing your list.
Re:That innocent little list o' worms (Score:2)
b) they're usually run by people who have better things to do than care about their operating system's security.
And unfortunately written by that same sort of people.
Even PR is exploitable...heh (Score:1)
Ouch...
and, referring to the Culp article again, with the DMCA in effect it is a lot easier to shut ppl up about MS's vulnerabilities than it is to fix them.
OOOoooo...that really hits home.
Regardless (Score:2, Insightful)
While it is certainly up to the vendor to release as bug-free code as possible, I disagree with his exoneration here. "If you don't know how to use it, don't" holds true regardless of what OS we're talking about. A Unix sysadmin who doesn't patch his/her box(es) is as much to blame as an MS sysadmin who fails to do so.
Whether or not the number of exploits for IIS is a direct result of how widely it is used outside the "heavy metal" internet-server arena is anybody's guess. But to even suggest that sysadmins should say "oh, fuck it, it's the vendor's fault" is a bit like putting one's network in the hands of God... maybe it will be OK, and most likely it won't.
Re:Regardless (Score:5, Informative)
Windows patches and hotfixes are a whole world of pain. SP2 for NT4 erased filesystems. SP6 crippled people running Notes. Hotfixes regularly blow each other away. They're a *mess*, and a good Windows admin will be *very* cautious about applying either hotfixes or service packs for NT/W2K/XP because the QA on them seems to be so low, so often.
Re:Regardless (Score:2)
This flies in stark contradiction to my experience playing with the kernel in Linux, where a single errant pointer can wreck an entire make. There is some benefit to having the source code available; but in this particular instance, less may actually be more for those who, like myself, don't want to have to check hundreds of lines of code to fix an LPR vulnerability.
That's not to say NT's hotfixes are foolproof, but there is a reason Microsoft has finally put the automatic update feature into place with Windows XP: they are confident enough that people won't be turning on their systems one day and having them crash due to an update installed overnight. And in my experience, this hasn't yet occurred in Windows XP.
Re:Regardless (Score:2)
Yeah, ruining a *whole make*. That's awful. Just as bad as hosing entire filesystems.
And it's a good thing you didn't, too, since one of the reasons NIMDA caught some people unawares was a case where software updates would keep switching IIS's indexing server, and hence the vulnerability, back on under certain circumstances.
Are 2K fixes, in general, better than NT4? Sure. Are XP ones better? Who knows, it's hardly had time on the market for problems to occur. But they're still a mile away from the Unix/BSD/Linux world (although it appears Apple are going to drag the rep of the BSD world down...).
But quite frankly, anyone who auto updates their server, of any class, is a fucking moron.
Re:Regardless (Score:2)
Says the overzealous UNIX nut who wore a dress to his own wedding [diaspora.gen.nz]. I'll pass on your judgment calls, thanks.
Re:Regardless (Score:2)
Oh, wait, actually that was just an ad hominem attack and a link to a picture of a guy in a kilt.
Re:Regardless (Score:2)
You know what they say about men wearing kilts...
Full Disclosure analogy (Score:1)
It is about lighting a "fire" under a vendor's ass.
Perhaps, so Culp does not forget this point, he should take the advice in another story and "tattoo it on his butt" if he needs to.
And not in invisible ink, btw.
IIS is an appropriate acronym. (Score:1)
It Isn't Secure.
How apropos.
biology/environment (Score:2)
So how does Microsoft survive? Is it a virus?
Full Disc. everywhere BUT the computer industry (Score:2, Insightful)
Take, for example, the auto industry. If you buy a new/used car and it is a lemon or has massive faults that can cause serious damage, the vendor is expected to state those faults [ftc.gov].
I have two children and ANYTIME there is even the slightest risk of problems with the products we have bought for them, the vendor says don't use it any more.
You would think that Microsoft would have learned from Firestone/Ford....
Re:Full Disc. everywhere BUT the computer industry (Score:2)
Unless you were the person responsible for security and the PHB goes for blood.
Software liability and disclosure (Score:3, Insightful)
Almost every piece of commercial software you install these days has something in the license like (taken from the Red Hat legalese):
"There is no warantee for the program, to the extent permitted by applicable law. Except when otherwise stated in writing by the copyright holders and/or other parties provide the program "as is" without warranty of any kind, either expressed or implied, including, but not limited to, the implied warantees of merchantability and fitness for a particular purpose. The entire risk of as to the quality and performance of the program is with you. Should the program prove defective, you assume the cost of all necessary servicing, repair, or correction."
Now someone explain to me why, when software vendors disavow all responsibility for their products, they should be granted some special status with regards to information about those products' misbehavior.
Disallow Liability Disclaimers (Score:2)
In the current model, even with full disclosure, the most they risk is sales loss due to bad PR, and to modernize the old saw, "nobody ever got fired for buying Microsoft".
subliminal messages (Score:2, Funny)
It's not just what he says; it's how he says it. For some reason, the above sentence makes me think of a particular vendor.
This guy Culp's name... (Score:2)
Since Mr. Culp is Microsoft's apologist, might his title at MS be Mea? That would make his full title there Mea Culpa.
Or, since they have found MS guilty of being a monopoly, would that make this person in charge of culpability for MS?
ttyl
Farrell (running, ducking and hiding...)
can someone explain (Score:2)
I'm not interested in arguments about open-source systems, or how vendors should be liable for bugs, etc...
I simply want to know why it makes sense to publicise the code for a vulnerability, as opposed to saying "there's a bug in this area, we're working on a patch". What are the benefits?
I wonder: should we send Osama Bin Laden precise instructions for making anthrax, smallpox, or nuclear weapons?
Re:can someone explain (Score:2)
that may well be true, although it could also be argued that there may actually be a diminished threat due to less widespread knowledge. however, we cannot rule out the possibility that malicious users may discover the exploit simultaneously and use it before the patch is available. so you're right: it is important that the patch become available ASAP.
but the fact remains that the onus is on the vendor to provide the patch, and I think that the main driving force behind them fixing it quickly is that they must assume that some malicious user has discovered it, because if they fail to provide a patch in a timely fashion and the vulnerability is (err) exploited, then they will lose respect in the marketplace, which, since they're closed-source, is of utmost importance.
i believe that the market, above anything else, compels closed-source vendors to provide these patches regardless of whether or not the exploit is well-known, so i don't think that this represents a significant benefit of Full Disclosure.
Re:can someone explain (Score:2)
again, it's in the vendor's interest to support its customers as well as it can. they cannot afford to lie about 'vaporware' vulnerabilities or fail to provide effective fixes.
i think my analogy stands.
Aye, there's the rub! (Score:2)
> Since full disclosure has become the norm, the computer industry has transformed itself from a group of companies that ignores security and belittles vulnerabilities into one that fixes vulnerabilities as quickly as possible. A few companies are even going further, and taking security seriously enough to attempt to build quality software from the beginning: to fix vulnerabilities before the product is released.
And Microsoft doesn't like fixing problems, let alone building quality in from the start. Those activities don't add anything to their bottom line; it's a waste of resources.
Microsoft doesn't like the new norm, therefore it doesn't like full disclosure. (Where's the surprise?)
To say nothing of the bad PR that hits the world's presses twice a week when the latest MS-specific exploit shows up at the disclosure site.
The threat (Score:2)
The real threat is someone who goes looking for security holes, finds them, and quietly uses them to steal information or money. It's the people who are stealing credit card numbers, bank account info, and military information that are threats. Serious attackers will often work to obtain inside information, and may be willing to combine physical attacks with computer attacks.
Vulnerabilities left open but not publicized open doors for the real attackers. Non-disclosure shuts down only the more inept script kiddies.
See also Richard Forno's article (Score:2, Informative)
(also on The Register [theregister.co.uk]).
Bruce Schneier to speak in Minneapolis (Score:2)
"The Natural Laws of Digital Content" on November 15 at 7:00 at the University of Minnesota Minneapolis campus.
The subject of the talk is related to the topic of this story: how legislation such as the DMCA interacts with computer security issues. So if you're interested in this topic and live near Minneapolis, click the link above to find out details about this talk.
Also, we [faircopyright.org] hope to tape Bruce's talk and put up video and audio of the talk on our web site at a later date.
valid alternative to full disclosure (Score:2)
I think that in such a world, software quality would improve dramatically, and software manufacturers would be at least as motivated to fix bugs as they are in a world with full disclosure.
Look at this quote from Culp's piece... (Score:2, Insightful)
"Providing a recipe for exploiting a vulnerability doesn?t aid administrators in protecting their networks. In the vast majority of cases, the only way to protect against a security vulnerability is to apply a fix that changes the system behavior and eliminates the vulnerability; in other cases, systems can be protected through administrative procedures. But regardless of whether the remediation takes the form of a patch or a workaround, an administrator doesn't need to know how a vulnerability works in order to understand how to protect against it, any more than a person needs to know how to cause a headache in order to take an aspirin."
This is Microsoft's opinion in a nutshell: Don't worry about the details, we'll take care of you. That doesn't surprise me for end-users, but for administrators? When I see a bug announcement with a detailed example, such as the ftp_conntrack bug in iptables, it is tremendously advantageous to actually understand the bug and how to deal with it. In that case, several workarounds suggested themselves, because the bug only affected RELATED connections.
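To illustrate the kind of workaround that understanding enables, here is a minimal sketch, assuming (hypothetically) that a conntrack bug is only triggered by RELATED connections; the rules are illustrative, not the actual advisory's fix:

```sh
# Usual stateful rule: accepts both connection classes together.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Interim workaround: drop the combined rule and accept only
# ESTABLISHED, so the buggy RELATED matching is never exercised.
# (Cost: active-mode FTP data connections will break until patched.)
iptables -D INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED -j ACCEPT
```

An administrator who only knows "apply the patch when it ships" cannot improvise a stopgap like this; one who knows which connection class is affected can.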
Now take the MS paradigm: I wait until they release a patch, or detailed instructions which I should follow by rote. Of course, I am affected by the vulnerability longer; furthermore, I get no transferable knowledge from the experience. Next time there's a similar bug, I just have to wait, again, instead of being able to invent a workaround.
Sure, it's _possible_ to implement a workaround when I don't understand the vulnerability, but I sure feel a lot better when I understand the problem AND the solution. I simply don't understand how this MS scheme (where everyone is an unenlightened end-user, waiting for cryptically-named patches which they don't understand) could appeal to any business OR home user. By assuming that even its administrators are unqualified to do manual reconfiguration by themselves, or even really understand what they're doing with the OS, MS has effectively crippled its fleet of administrators. And this, ultimately, is why the NT (2k/xp, whatever) platform is the huge, gaping security hole it is.
I simply can't believe the arrogance and stupidity of the statement above.
"...an administrator doesn't need to know how a vulnerability works in order to understand how to protect against it, any more than a person needs to know how to cause a headache in order to take an aspirin."
I think that speaks for itself.
Re:Errors.. (Score:1)
Re:Errors.. (Score:1, Offtopic)
He's. Contraction. [he is], [he] possessive.
Which is it? "he is written"? And don't say "He has", that's not a contraction!
Now let's discuss your dangling participle...:-P
Re:Errors.. (Score:1, Offtopic)
-Paul Komarek
Re:Errors.. (Score:1, Funny)
They re-did PowerPoint?!?! k-l33t man!
oh - wait - they just did a PowerPoint presentation.
Re:Schneier Understands Crypto (Score:2)
Well, that is Bruce for you; he is kinda random. A while back he published a 'Schneiergram' listing a whole rack of problems he had identified in IPSEC. Then, after the group explained to him why he had entirely failed to understand the problem, he didn't withdraw the paper, but it did disappear from the index on the Counterpane site and kinda faded from view. Every so often someone reads back issues of Crypto-Gram and rushes to the list to debate the issues raised by the 'expert'.
So when it comes to false alarms Bruce is not exactly whiter than the driven snow.
The balance between full disclosure and partial disclosure is very hard to draw. The problem is in large measure often on the side of the vendors. But security 'experts' are not always exactly blameless. Quite often the exploit scripts are written by people who have no connection with the discovery of the bug and after it has been acknowledged and is being worked on.
The basic problem is that the easiest method of getting press attention is to claim credit for the discovery of some security bug or other. 'Full disclosure' is often no more than a convenient excuse for being a media-whore. Those of us who are responsible for actually designing security systems do not in general spend much (or indeed any) of our time returning journalists' phone calls with nifty quotes.
Counterpane conflict of interest (Score:3, Interesting)
Re:Proof that Full Disclosure is the ONLY way to g (Score:2)
The latter technique, of course, worked admirably on flight 93, reducing losses by at least tens of millions of dollars, and possibly by billions. (If they'd been a little luckier they could have reduced the flight 93 loss to nearly nothing.) Flight 93 didn't rely on a single gov't action: private individuals and companies closed the information loop and then attempted to counteract the threat, while the gov't response had barely started. There are lessons here on how to build a civil defense infrastructure to better handle future attacks.
Information security attacks are just as expensive as the direct costs of the 9-11 attacks, costing billions of dollars a year in direct financial losses (and billions more from disclosure of sensitive information). The only difference is that infosec attacks are diffuse and don't draw much attention, while an equivalent military attack is spectacular and extremely photogenic. (Attacking the first WTC tower was a military action. The second was a publicity stunt designed to increase indirect losses.)
And don't anybody tell me that it's a poor comparison, that computer viruses don't cost lives and how can I be so insensitive. Suppose infosec attacks cost each American an average of one hour of their time each year. (Which is probably within an order of magnitude of being correct.) That's a total loss of 250 million man-hours. Assuming that the total work a person can do is 150,000 hours/lifetime, that's roughly 1,700 human lifetimes squandered by infosec attacks each year. And that's not considering attacks against military and medical databases, and against industrial equipment, which can and do directly kill people.
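A quick back-of-envelope check of those numbers (every input here is the parent comment's assumption, not measured data):

```python
# All figures are the commenter's stated assumptions.
population = 250_000_000   # rough US population
hours_lost_each = 1        # assumed hours lost per person per year
working_lifetime = 150_000 # assumed productive hours per lifetime

total_hours = population * hours_lost_each   # 250 million man-hours
lifetimes = total_hours / working_lifetime
print(round(lifetimes))    # about 1667, which the comment rounds to 1700
```

So the "1,700 lifetimes" figure follows directly from the stated assumptions; the contentious part is the one-hour-per-person estimate, not the arithmetic.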
People who think there cannot be an "Electronic Pearl Harbor" are in for quite a surprise, just as people who thought foreign affairs don't affect the modern American lifestyle were surprised on 9-11. Most current guerrillas lack the competence to carry out severe infosec attacks, but ignorance and religion are not necessary prerequisites for anger and extremism.