Hackers Disagree On How, When To Disclose Bugs
darkreadingman writes to mention a post to the Dark Reading site on the debate over bug disclosure. The Month of Apple Bugs (and recent similar efforts) is drawing a lot of frustration from security researchers. Though the idea is to get these issues out into the open, commentators seem to feel that in the long run these projects are doing more harm than good. From the article: "'I've never found it to be a good thing to release bugs or exploits without giving a vendor a chance to patch it and do the right thing,' says Marc Maiffret, CTO of eEye Security Research, a former script kiddie who co-founded the security firm. 'There are rare exceptions where if a vendor is completely lacking any care for doing the right thing that you might need to release a bug without a patch -- to make the vendor pay attention and do something.'"
Nothing... (Score:4, Funny)
Re: (Score:1)
Re: (Score:2, Insightful)
Government Oversight (Score:1, Insightful)
There needs to be a law against releasing exploits without giving the company time to react to the find.
Perhaps there should be a software developers' association that a company can join that handles oversight on this issue. Any "hackers" who find a critical bug in a piece of software could bring it to the association's attention, and there could be sanctions if the developer refuses to
Comment removed (Score:5, Informative)
Re: (Score:3, Funny)
Re: (Score:1)
Ummmmmmmmm, did I miss a memo or something?
KFG
Re: (Score:2, Insightful)
If the bug can be quietly fixed without harm to the public then the developer is given time to fix the problem. If there are several exploits in the field then the general public is warned and a fix is made as soon as possible. The commission would have the power to enforce a law preve
Re:Government Oversight (Score:5, Insightful)
But the most egregious examples of "Find security flaw -> Issue patch -> Wash, rinse, repeat" are found in programs (Sendmail? BIND, anyone?) or operating systems (Windows
Consider the OpenBSD approach, where security was a priority from day one, and the excellent track record they have in this area, and contrast it with Microsoft's track record, where only marketing was a priority from day one. The only way this will change is when it is no longer profitable to place such a low priority on security, and the two ways you arrange that are by demonstrating that the current situation is an arms race that is not sustainable, or, by waiting for a day when Grandma and Joe Sixpack care about computer security enough to refuse to buy anything that doesn't deliver it. Personally, I find the first option to be far more realistic, and it also helps to avoid the "only two choices" dualism that I keep seeing everywhere (especially in politics... "Democrat vs. Republican", "Left vs. Right", "With us or Against us") that is suffocating real change.
Re: (Score:2)
Yes lets.
One has made a very successful product and made lots of money; one has produced a probably vastly superior OS that nobody uses. Windows might be a bag of shit, but in terms of the aims Bill set out to achieve (getting filthy rich) it is a runaway success.
Re: (Score:2)
You're making my point for me, actually. That Windows accomplished Bill's goals does benefit Bill, but it does nothing for me and nothing for your average user who has a Windows installation. For the 99% of the population who are not Microsoft employees an
Re: (Score:2)
I'm not sure that that is actually true. I'd say the Internet has been somewhat popular since around 1996. Thinking back over the last 10 years, I think there have been considerably fewer "crippling" viruses and such of late. Maybe the current arms race is actually petering out.
"by waiting for a day when Grandma and Joe Sixpack care about computer security enough to refuse to buy anything that doesn't deliver it."
I
Re: (Score:3, Insightful)
We would still be using Paper Tape loaded through an ASR33 Teletype
Seriously though,
Exposing bugs like this is (IMHO) a pure FUD stunt. OK, tell the vendor about the bug, and if they don't fix it in a reasonable time (variable depending upon severity, etc.) then by all means publicise the problem in order to put some pressure on them to fix it.
But getting Officialdoom involved? You are a prime candidate to be sectioned. Civil Servants the world over can't organise their way out of a pape
Re: (Score:2)
Here, fixed that for you.
Comment removed (Score:4, Interesting)
Re:2 months (Score:5, Interesting)
There may not be a patch to solve the problem, but perhaps there is a significant workaround that could avoid some trouble.
This is exactly why it is difficult to assign a window of disclosure to such issues. Not too terribly long ago, some of the larger firms managed to get together and settle on a 30-day notice.
Also, you might remember that a little company called Cisco was sitting on a vulnerability for quite a while until someone went psychotic over the deal.
In the grand scheme of things it comes down to protecting your image. It almost seems like the policy on vehicle recalls. Unless X number of issues arise... just don't deal with it. However, if it becomes substantially used or finds the public eye... it suddenly becomes a much larger problem.
Honestly, an arbitrary date is rather inflexible, and a system that takes into account the impact of the bug needs to be used. Pump out tons of crap software? That isn't exactly the problem of the common man, but rather the problem of the organization's software development model.
Organizations and individual people lose time and money to support these industry bug shields. Again, it should be a case-by-case determination depending upon the level of potential harm.
Re: (Score:3, Insightful)
Wow. Do you have any evidence whatsoever to back that claim up? Or did you just see it on IRC somewhere?
Back in reality, it is almost universally assumed that a published exploit for which no patch exists will lead to much more damage than a published exploit for which a patch is widely available. In fact, it is so obvious (to almost everyone but you) that such a study has never even been perf
Re: (Score:2)
Where do you get your data, and how do you know an uncovered exploit is not being actively used?
YOU DON'T...
Exactly at what point did I say research materials must be released immediately? I didn't, did I? That wasn't a mistake, as I wasn't advocating the release of exploitable vulnerabilities immediately.
You failed to read my post, you failed to interpret what little you did read, and ultimately gave off a gunshot reaction to some thought you forme
Re: (Score:2)
Here's an example:
Which is going to cause more damage:
a) releasing a contagious pathogen into a population before a vaccine has been developed and distributed
b) releasing a contagious pathogen into a population after a vaccine has been developed and distributed
I don't
Re: (Score:2)
The difference between full disclosure and just informing the companies would be a lot closer to telling the government and the WHO about bird flu cases but not telling any of the general public.
Still, that oversimplification falls wide of the mark.
It's a generalisation, but probably true, that if you need to make an analogy of something in order to understand it (or explain it to a 3rd party) then you (or that 3rd
Re: (Score:2)
Avoiding all analogies, the best mathematical model I can come up with is
Re: (Score:2)
Your equation makes intuitive sense, but it doesn't model an important factor - the number of people who know about the vulnerability is not nearly as important as who they are. Here are some classes of people, and how disclosure affects them:
Skilled black hat hackers: These people are often
Re: (Score:2)
SKs (script kiddies) and BHs (black hats) are the people who do damage with vulnerability knowledge (VK). There are basically two cases when a researcher is considering publication (see the rough sketch below).
1) the researcher is the first to discover the VK.
2) a small number of BHs already have the VK.
For #1, practicing full disclosure does a lot of damage, no question.
For #2, practicing full disclosure may decrease the time to patch (giving the small number of BHs slight
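Putting the parent's two cases into a rough back-of-the-envelope form (purely my own sketch, not the equation referenced upthread, which is cut off): let p be the probability that some BHs already hold the VK, let D_q and D_f be the damage done per day while the bug circulates quietly versus once it is fully public, and let T_q and T_f be the days until a patch is deployed under quiet versus full disclosure. Then, roughly,

    E[damage | quiet disclosure] ~ p * D_q * T_q
    E[damage | full disclosure]  ~ D_f * T_f

Full disclosure only comes out ahead when D_f * T_f < p * D_q * T_q, i.e. when the bug is probably already being traded (case #2) and publication shortens the patch window enough to offset handing the exploit to everyone (case #1).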
Re: (Score:2)
Computer software today is, on the whole, extremely insecure. We may never perfect it, but it could be a whole lot bet
Re: (Score:2)
s/it's/"responsible disclosure" is/
Re: (Score:2)
Re: (Score:2)
It's common sense...
The longer a problem persists the worse it will become.
Re: (Score:2)
A security bug is only a problem when someone knows how to exploit it. As long as no one knows how to exploit it, it is not a problem, and no problem is persisting.
Re: (Score:2)
It's not severe if no one knows.
The window could be that much larger than two months.
The sliding window works both ways. Longer if it's not a problem right now, but if it's actively being traded and used... it becomes a much larger problem.
Re: (Score:2)
Yes the window works both ways. That is why the word "reasonable" is used. Two months is considered reasonable for most types of software. And don't even try to condescend. This conversation has provided me with no insight. I make a career of this stuff.
Re: (Score:2)
Which leads to the suspicion that it's not a technical problem at all but a public perception problem, hence a marketing issue. Which leads me to think we should disclose as early as possible to give the manufacturer a good spanking, because after all, they are the ones responsible for the issue, not hackers, and not security folks.
How's that
Re: (Score:2)
I'm not sure I really advocate holding a proverbial gun to someone's head. I'm just not much of an activist in that regard.
Maybe not a threat so much as a response rating? Surely tracking data on responsiveness would yield long-term value in addressing these problems. Coupling that data with the line-item fixes, vulnerability timelines, and threat values should show a negative or positive history with regard to quality assurance.
Honestly, I'm sure something like this has to exist already, doesn't it?
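For what it's worth, a minimal sketch of what such a response rating could look like (the field names, weighting, and scoring here are entirely hypothetical, not any existing tracker):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Advisory:
        reported: date         # when the researcher notified the vendor
        patched: date | None   # None if still unfixed
        severity: float        # e.g. a CVSS-style score from 0 to 10

    def responsiveness(advisories: list[Advisory], today: date) -> float:
        """Severity-weighted average days from report to patch (lower is better).

        Unpatched issues count as open until 'today', so a vendor that sits
        on severe bugs keeps looking worse, not better.
        """
        weighted_days = 0.0
        total_weight = 0.0
        for a in advisories:
            closed = a.patched or today
            weighted_days += (closed - a.reported).days * a.severity
            total_weight += a.severity
        return weighted_days / total_weight if total_weight else 0.0

Publish a number like that per vendor and the "negative or positive history" described above falls straight out of the data.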
I don't intend to be gentle (Score:2)
Frankly, I don't intend to be gentle. Manufacturers ignoring security problems (or delaying fixes) for purely economic reasons aren't gentle either, and produce a lot of work for those who have to live with the crap, i.e. systems administrators. I'm not an activist either, but I work as an IT consultant having to listen to all these stories, following escalations, and listening to manufacturers' managers who try to t
Your second month doesn't matter. (Score:2)
As soon as the patch is released, the crackers will be checking the files replaced and the differences in those files.
They can usually get an exploit fielded within 24 hours or less.
The best you can do is to take steps to minimize the threat and log any activity so you can see if you've been cracked. Running snort can tell you if any traffic is suspicious.
But you should be doing that anyway.
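To make the parent's point concrete, here is a naive sketch of the first thing an attacker does once the patch ships: diff the replaced file against the original to see which byte ranges changed. (The file names are hypothetical, and real work is done with disassembly-level diffing tools, but the principle is the same: the patch points you straight at the bug.)

    def changed_regions(old_path, new_path):
        """Return (start, end) byte offsets where the two files differ."""
        with open(old_path, "rb") as f:
            old = f.read()
        with open(new_path, "rb") as f:
            new = f.read()
        regions, start = [], None
        for i in range(min(len(old), len(new))):
            if old[i] != new[i]:
                if start is None:
                    start = i
            elif start is not None:
                regions.append((start, i))
                start = None
        if start is not None:
            regions.append((start, min(len(old), len(new))))
        if len(old) != len(new):
            regions.append((min(len(old), len(new)), max(len(old), len(new))))
        return regions

    # Hypothetical usage against the pre- and post-patch copies of a replaced library:
    # for lo, hi in changed_regions("old/some_library.dll", "new/some_library.dll"):
    #     print("changed bytes at offsets %d-%d" % (lo, hi))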
Opinion Swing? (Score:5, Insightful)
A few years ago when Microsoft started pressing for "responsible disclosure", they were pretty much mocked and ridiculed by everybody.
I'd like to think that there is now some real discourse on the effectiveness and responsibility of full disclosure vs responsible disclosure, and that security researchers are choosing responsible disclosure more often.
I'd prefer to think of things that way than to cynically surmise that this is simply a case of "when it's an MS bug, let's roast them with a 0-day disclosure, but if it's anyone else, let's give them a fair shake at fixing it"
Re: (Score:2, Interesting)
Re: (Score:3, Informative)
http://www.microsoft.com/technet/security/bulletin
If you search "microsoft.com" for "responsible disclosure", many of the recent security bulletins list who repor
Re: (Score:2)
MS should have a program whereby if you tell them first and let them patch it, they'll give some program or hardware (Zune?) to the first reporter of the bug, but if the exploit is released (by anyone) into the wild before the patch, then the offer is null and void. Assuming MS would play fair (and not have an insider leak the bug 2 hours before the patch), it seems like fair, easy, good business for MS. Surely the cost of a Zune or a laptop would be less than the bad press costs.
Meh. There'd be those people who would just want to piss in the pool and publicly release the exploits pre-patch just to fuck over the guy who told Microsoft.
Re: (Score:2)
I'd prefer to think of things that way than to cynically surmise that this is simply a case of "when it's an MS bug, let's roast them with a 0-day disclosure, but if it's anyone else, let's give them a fair shake at fixing it"
To be fair, one of the main points to consider is how the vendor has behaved historically. If you submit bugs to a given vendor and they completely ignore them, sometimes for years, until that bug is made public, then no good is likely to come of your discovery until the bug is made
Re: (Score:2)
Well, what if they have difficulties or other reasons that make it unlikely they are going to fix it? In other words, what if they don't care about your hammer? Then disclosure just ensures that it is out there to be used as a weapon against humanity at large.
Of course, this assumes that you (a) care about "humanity at large" and (b) might be caught in the destruction as well. You know, if you like actually used Windo
Re: (Score:1)
Re:Opinion Swing? (Score:5, Informative)
Re:Opinion Swing? (Score:4, Insightful)
I think such a "right" (I would call it an "entitlement", actually) really only makes sense if there's a reasonable expectation that general purpose computing in a networked context is safe and secure to begin with.
Given the true nature of computer networking today, far from having "rights", I'd say that software consumers have responsibilities: to avoid networking except with known good components; to develop their own software in-house so that they can better control the vulnerability testing and patching process; to conduct their own testing to their own standards on third-party software; and to not pretend that all their security problems are the responsibility of the third-party software vendor, easily solved by the vendor simply writing perfect software.
Re: (Score:2)
And how do they know that they are using good components if no one can tell them otherwise?
" to develop their own software in-house so that they can better control the vulnerability testing and patching process"
Ahh, so anyone who uses a computer needs to write their own OS and applications? Has to have a complete understanding of software and hardware engineering?
" to conduct their own testing to their own standards on third-party software; "
And be a maste
Re: (Score:2)
By accepting the responsibility to test the components themselves, or else admit that they can't reasonably expect the components to be secure.
Re: (Score:3, Insightful)
Components are only part of the story.
According to the Orange Book (the DoD manual for evaluation of trusted computing systems), the security of a machine can be rated no higher than the rating of its least trusted port. That includes everything, including the power cord and the air. A truly high-security system demands a generator inside the same Faraday cage as the device itself, and probably armed guards at the door.
The internet is an untrusted
Re: (Score:2)
The point I was trying to get at is that software users do not have a right to the information discovered by other people, regarding the security of the software they're using. Rather, software users have a responsibility to gather their own information, either by investing in information-gathering activities in-house (my idea), or by formally contracting with a third party, and investing resources that way (your idea).
Either way, I think my basic point still stands: if you w
Re: (Score:2)
Face it, a company that doesn't know how to review its own security also doesn't know how to rate the reliability of a security contractor. That gives rise to a whole class of snake-oil vendors for whom FUD is another word for 'marketing'.
Case in point: I think it was McAfee that came out with a white paper last year saying that Mac users Really Should Use AV Software, despite the fact that the software in question only catches bugs for which it has known profiles, and there are
Re: (Score:2)
Re: (Score:2)
easily solved by the vendor simply writing perfect software.
Ah yes, the old strawman, haughty and nattily attired. The entire industry knew that Microsoft's integration of IE, Outlook, and ActiveX was a terrible misstep, and Microsoft knew this too, from the moment of first inception. It was a competitive decision to damn the torpedoes and endure the consequences in the aftermath (i.e. by mounting a massive PR campaign to promote responsible disclosure after the barn door was open). "Might makes right" w
Re: (Score:2)
1. Software buyers are entitled to truth in advertising from software vendors.
2. Software buyers are responsible for securing their own systems, regardless of whatever lies the software vendors may have told them.
3. In fact, in the current state of networked computing, it is unreasonable to assume that a given piece of software is secure, regardless of what the vendor claims. Therefore, it is inappropriate for software buyers to blame software vendors for insecurity in the user's
Re: (Score:2)
I don't think that "altruistic" means what you think it means. People who use a piece of software, or operate in an environment where people do (or share globally internetworked systems with people who do) have a personal, vested interest in rational, thoughtful disclosure. That's not altruism, that's enlightened self-interest. It's selfish, in the correct, useful sense of that word, to do it right. The people you're worried about aren't the opposite of altruistic
Re: (Score:2)
So disclosure is supposed to be the hammer over the vendor's head to "make" them fix it?
If that is the only way to get them to fix it, yes. Several times that I know of bugs and even demo exploits were publicly released after researchers gave up on waiting for the vendor to ever fix it, or even respond saying they would fix it.
Well, what if they have difficulties or other reasons that make it unlikely they are going to fix it?
The point of security research is to promote security. If a vendor is unwi
Re: (Score:2)
Check the OpenSceneGraph development list archives if you don't believe me.
Re: (Score:1)
Sensible people should debate "how long is long enough", but I think it insane to fully reveal dangerous exploits directly to the public without providin
Re: (Score:2)
If the responsible disclosure rules are well designed, there is no reason to treat any vendor, good or bad, differently.
I disagree. If I find a big vulnerability and submit it to the vendor, my next action depends upon the vendor. Some bugs take longer to fix, and I won't necessarily know what a reasonable amount of time to wait is. If one vendor has a good track record and e-mails me back right away to say that they are working on it and it will take them 20 days, I'm inclined to wait at least 20 days before
Re: (Score:2)
They were mocked because they made a mockery of responsible disclosure by trying to keep the bugs they were informed of quiet rather than trying to fix them. I don't think there's much of an opinion shift; there never were that many people who advocated releasing exploits into the wild before informing the vendor as a courtesy. A few, sure, but always a minority. But it is
Re: (Score:1, Informative)
BTST too often.
What was it about IE being unsafe for 281 days? Utter bullshit. I've compiled a list of critical bugs that have remained unpatched since 2004, hence IE has never been safe since then. They are publicly known, Microsoft knows them,
Re: (Score:2)
That indicates that at least some of the responsibly disclosed bugs get fixed, doesn't it?
I understand the consternation about unpatched IE vulns. Unfortunately I don't know off the top of my head what the real story is.
Re:Opinion Swing? (Score:5, Interesting)
There exists a community of underground hackers (crackers?) who search for exploits. They find them, trade them, sell them, and use them to steal data and resources. Gone are the days where script kiddies just hack for fun; there is a serious black market involved, since resource and identity theft can be very lucrative.
When an exploit is discovered by a researcher, it is likely that the black hats have already discovered it. The software's users are already being harmed, although they may not realize it: smart hackers are good at covering their tracks.
In this scenario, "responsible disclosure" is anything but responsible. By waiting until the vendor has patched the software, users are being harmed. On the other hand, immediate full disclosure has three important effects:
One, it eliminates the black market for the exploit. If everyone knows about it, nobody will pay for it. This reduces the overall market for exploits and, compounded over many exploits, will drive hackers out of the market. If it is not profitable to find exploits, fewer people will do it.
Two, it gives the users an opportunity to take action. If, through full disclosure, I find out that Internet Explorer has a serious security risk, I can switch to Firefox. If my Cisco router has a problem, I may be able to work around it with an alternate configuration. On the other hand, if a researcher reports the exploits to Microsoft and Cisco directly, black hats are free to exploit my computer and my router until patches are released (if they ever are).
Three, it provides an incentive for vendors to write better software. If every software bug meant a black eye and angry users, you can be sure that there would be better software. On the other hand, the occasional well-timed patch looks like software "maintenance", a concept that shouldn't exist but sounds reasonable to the layman (after all, he has to have his car tuned up every so often, so why not his software?) The result of full disclosure, on the other hand, is more akin to an emergency recall; the producer has clearly made a mistake.
The concern, of course, is that the black hats don't already have the exploit, and that full disclosure gives it to them. Yes, this is the risk of full disclosure. However, given that black hats have an economic incentive to find exploits, while researchers rarely do, we can expect the probability of this to be low. And even if they don't have the exploit, releasing it still shrinks the exploit market (why pay for exploit B when you can get exploit A for free), it still notifies users of a potential problem, and it still incents vendors to write better software.
Full disclosure is responsible disclosure.
Four - one you conveniently forgot... (Score:2)
Instead of 500 companies silently being hacked and having some of their data stolen, 5 million people, including companies, are now under attack through the latest combination of script kiddie worm + dangerous hack.
So yes, it is irresponsible to just throw the data out there - because you vastly
Re: (Score:2)
I'm totally in this camp myself. The only thing responsible disclosure accomplishes is perpetuating the market for software that was written badly in the first place. Consider companies Rock and Scissors. Rock decides to push their product to market first at all cost. Scissors elects to create a development culture that promotes rigorous coding practices. Well, we all know how this story plays out: formation of a rebel Paper alliance. Then the Paper people are accused of being irresponsible for suffoc
Re: (Score:2)
Basically the value of non-disclosed vulnerabilities will shoot through the roof compared to non-patched but fully-disclosed vulnera
Re: (Score:2)
Meanwhile on OSS, we don't have to worry much about this at all most of the time, and still have no government organization or stupid laws making it that way.
I figure I'll be modded down into oblivion, but what the hell.
Re: (Score:2)
If they sit around drinking coffee for 190 days then start work on it, that's bad. If it takes them 190 days to get it right, at least they're working on it.
There's another story along these lines.... (Score:2)
http://news.com.com/The+good+and+the+bad+of+bug+c
imagine the job interview (Score:2)
Wow (Score:1)
Re: (Score:2)
I guess now he's just another 'trusted source' in the security biz, hmmm?
Re: (Score:1)
The problem is... (Score:2)
One problem (Score:5, Insightful)
There actually is a middle ground.
Some say, "Hey, these vulnerabilities exist whether they're reported or disclosed or not," just as MOAB says in its FAQ. But the problem is that they overlook the practical side. Sure, the vulnerabilities, and maybe even working exploits, exist, but as long as they're hoarded (and not used) by very small and tight-knit groups of people, they're not getting actively exploited in the wild across massive userbases. Could high value 0day exploits perhaps be used for isolated penetration? Sure. But could they be used (for any period of time) for a mass-spread worm or other malware? Nope. It'd be hours before security firms and/or vendors identified the issue.
So when you choose to disclose previously undocumented issues before giving the vendor any chance to respond, which some claim they're doing to improve security, there is a greater chance of exploit across a much wider base of users, which can have a much wider and catastrophic impact. Some say that as a sysadmin, they'd want to know about such vulnerabilities so that they can protect and mitigate themselves. But other than for high value targets and corporate or government espionage - which can perhaps have their own channels for "earlier" disclosure when identified by entities like US-CERT or Information Assurance agencies - I don't see how people can reasonably expect to be targeted by extremely valuable and as-yet-undocumented vulnerabilities. It's a point of pride - and sometimes money - to sit on such vulnerabilities.
The bottom line is that the vendor should always be informed in advance, if there is any real concern about security on the platform, and not just ego stroking or slapping down "fanbois". How long in advance and how long a vendor should be waited on is somewhat subjective, of course. Also, no one's saying that an "independent" "security researcher" is beholden to a corporate interest. But then they shouldn't operate under the guise of responsibility or the feigned notion of wanting to "improve security", when some persons' mechanisms for disclosure are nothing more than PR attempts, or another notch in the bedpost (hmm, or probably NOT a notch in the bedpost...)
Oops, you're completely wrong (Score:2)
Apple did NOT give anyone any order (court, legal, threat, or otherwise) having anything to do with the wireless issue.
In fact, your entire timeline, and nearly everything in your post, is completely wrong.
Brief summary:
- The exploit was shown at Black Hat and demoed on a MacBook with a third party wireless card first. Apple was NOT informed of the issue.
- The issue affected numerous 802.11 chipsets, drivers, and multiple operating systems, in
Re: (Score:2)
At very least, the update closed a set of malicious-header holes Apple discovered independently, during an internal audit it started in response to the firestorm of "is-there/isn't-there a hole?" speculation caused by the Krebs story.
The holes Apple closed may or may not be the same ones Ellch and Maynor claim to have exploited, since as far as I know,
THIS JUST IN... (Score:2)
Depends on the nature of the bug (Score:2)
Re: (Score:2, Informative)
It really depends (Score:2)
1) Make real attempts to release secure software, rather than just ship shoddy software as fast as they can onto the unsuspecting public.
2) Have a serious method for responding to issues quickly and effectively when they are found outside the company. This really just means good customer support combined with a good method of patching shipped code safely and effectively.
3) Treat security researchers as friends who help improve their products.
F
Re: (Score:2)
If you're going for private disclosure, do so as anonymously as possible.
Re: (Score:2)
Ya, because small women and young children deserve to be crushed to death for riding in the front seat...
Of course, if someone were to suggest something likely to have a statistically significant impact on yearly motor vehicle deaths, like say mass transit, that'd just be inconvenient...
I guess mass transit would be akin to simply not using products from software vendors with poor security track records. In that vein
Re: (Score:2)
In that vein I suppose airbags are about as useful as trying to turn bad vendors into good ones with nothing more than bug disclosure practices.
Sounds like you don't think that full disclos
Re: (Score:2)
As to other analogies... I've yet to hear of a medical utensil causing injury because it was too well sterilized.
Shaping up bad vendors?
Nuanced (Score:2)
All's fair... (Score:5, Interesting)
Hackers are not under any obligation to disclose anything. I'm not aware of any law that either forces them to disclose a vulnerability that they have discovered, or any due process that must be followed to do so. I'm also not aware that writing or distributing proof-of-concept code is illegal. Judging by the number of large software vendors either in court (IBM, SCO) or deliberately misinterpreting existing legal documentation (Microsoft and Novell attack the GPL), the law is clearly the only deciding factor in how business will be done in the IT industry.
Therefore, throw your morals and principles out the window. This is laissez-faire economics at its best. Mud-slinging, sabotage, legal wrangling, death threats and more await as we determine just who has the best software. If these vendors are truly interested in some good-faith reporting from the people who are discovering the vulnerabilities, maybe a show of good faith on their part might be nice. There's absolutely no incentive to do anything in a reasonable or "nice" way, when dragging a hated vendor's name through the mud is both legal and cool.
There are a few things I can think of that would improve matters and reach a common ground where truly malicious software is written only by a few bad apples:
Just to be perfectly clear: I am condoning the MOAB and any other MOxB. I've used too much bad software and seen too many vendors be held utterly unaccountable for their premeditated actions against the consumer. Lobby groups funded by these large vendors continue to erode consumer rights. If this is not how business is to be done, perhaps the industry leaders should set a better example.
No *legal* obligation (Score:2)
Also, why give them a pass because they're MOxBing select vendors? Wouldn't it be be
Re: (Score:2)
Re: (Score:2)
It's easy to say "bad software" from your point of view, but what exactly IS "bad software"?
Sorry, I wasn't very clear about this. I did mention that the key point was the misrepresentation of software for the purpose of gaining some control over the victim's computer. As a result, Microsoft would mostly be on the right side of the law, but a company like Gator would not. The difference is in what the software is purported to do. I don't even know what Gator is supposed to do. It doesn't really matter, since its primary purpose is to send personal information back to the software creator. In th
Hackers also disagree on... (Score:2)
easy (Score:2)
That's how and when.
I already talked about this. (Score:3, Insightful)
In one of my previous posts, I have already talked about this.
Companies have no interest or goal other than to make money. Fundamentals, people, fundamentals! If you think, for one second, that an idea from any company not resulting in immediate profit is correct, you are a fool. They cut corners, discriminate based on accredited and formal education rather than will and raw expertise and experience, and they implement management schemes that do more harm than good for the sake of bookkeeping for VCs and shareholder confidence. They have to make every judgment off of a cost analysis report. And what few people understand is, if it's cheaper to continue on the same path, they will, even if people are dying (car manufacturers) or getting screwed (Microsoft software unreliability).
I can't believe this debate is taken seriously! The Companies want this precedent because it's cheaper to ignore most exploits than to actually have to hire someone who can do something to better the software. Companies want this because it adds another variable (in their favor) to the cost analysis of fixing a problem... it gives them choice. And as we all know, from Companies' own assertions, that choice is bad and force is the only thing applicable. Companies don't give you much of a choice, why should you give them any? Open Source doesn't get a choice, why should its competitors (proprietary software)? If Capitalism is the so-called "best", then it should be able to compete in the exact same fashion as other systems and prevail. So don't do this double-standard crap of "Oh, if it's a company's software, do 'X'; if it's not, then do 'Y'; only because of a benevolent precedent suggesting you should give a Company a break while it's OK to lay hard and firm on some other ideology."
If a Company releases software that is buggy, then the very instant you find an exploit, it should be released to the public with all that you have researched, including example exploits. If the Open Source community can fix it quickly, then surely Microsoft or Adobe can too, with their all-mighty Capitalist ideals and absolutely-necessary 'management'....
There is no precedent here. It is not a debate. You paid for the software, and if you don't get what you paid for (and some), then you should have absolutely NO qualms about sticking it back to the person who pawned it off on you. If they are so great, then let them prove it. But they aren't, and that's why they are coming up with all these little social tricks trying to get people to make an exception to further propagate the illusion that proprietary software is "good", the "best money can buy", or whatever.
You paid for the software. It's yours. You got screwed. Let people know! If you got screwed at the used-car lot, you'd let your friends know the details... you'd even feel socially obligated to do so. Software is NO different. You are socially obligated to blow the whistle for every little thing you find, and blow it till you're blue in the face; you paid for it, and you didn't get what you expected. It is NOT illegal to blow the whistle on crappy products you end up paying for. In fact, for some products it's a federal offense to pawn off crap to the consumer (think Lemon Laws in the United States). If you really want to get technical, then there already is legal precedent set in this regard because it's illegal to sell a car that is reasonably too problematic in the United States. Maybe we should make it illegal for software Companies to release crappy and overly buggy software too!
If you find an exploit: as soon as you can write up a concise report, sample code et al., hit the "Send" button. DO IT!
The more foolish fool (Score:2)
This stems from the simplistic notion that a lot of engineers have that companies are just like giant computers, programmed only to maximize money creation.
Over time, you will come to realize, as I have, that companies are actually composed of many PEOPLE. Companies do nothing by themselve
Re: (Score:2)
The more foolish fool is one who believes as you do - that all companies have simplistic goals that are not at all influenced by human behaviour.
The Janitor might have these human emotions you speak of. The Secreta
Re: (Score:2)
If a Company releases software that is buggy, then the very instant you find an exploit, it should be released to the public with all that you have researched, including example exploits.
Why? If there is no known exploit in the wild and, without the information I have, there is unlikely to be one, why should I make things easier on malware authors? If the vendor has a history of quickly fixing reported vulnerabilities, how does it benefit me to undermine the security of my own system in that way? Full disclosure migh
Re: (Score:2)
What if it's a new car and a wheel keeps falling off? ...ultimately I think the responsible thing to do is release the exploit immediately.
What if it is an old car and one of the locks sticks sometimes, when it is cold out? The problem I have with your argument is that you're trying to argue in favor of always taking an action based upon a situation that is an extreme. Sometimes it is ideal to release info immediately and sometimes it is not, depending upon many factors. Not all companies need poor press
Here's the bibliography link again (Score:2)
You had me at... (Score:2)
You must release the information ASAP (Score:2)
Or any of a number of dirty tactics.
Sorry, but even the briefest look at the history of corporate attitudes indicates that they cannot be trusted. This goes back to corporations before America even existed, not just American Corporations. One reason why I agree with Ben Franklin when he said that the constitution should ban corporations.
One chance... (Score:2)
1) The problem is discovered.
2) The problem is reported to the vendor, the report including a fixed, reasonable date for either a fix or the date for the final fix (to allow for tough fixes). The time allotted reflects the severity of the problem - more severe results in less time (a rough sketch of such a computation follows below).
3) When this fixed date or the vendor date (if given) is reached, the problem is disclosed regardless, complete with POC exploit if available.
This method forces vendors to take security r
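A rough sketch of the deadline arithmetic in step 2 (the severity buckets and day counts below are invented for illustration, not something the parent specified):

    from datetime import date, timedelta

    # Hypothetical windows: the more severe the bug, the less time the vendor gets.
    DISCLOSURE_WINDOW_DAYS = {
        "critical": 14,
        "high": 30,
        "medium": 60,
        "low": 90,
    }

    def disclosure_deadline(reported_on: date, severity: str) -> date:
        """Date on which the problem is disclosed, patch or no patch."""
        return reported_on + timedelta(days=DISCLOSURE_WINDOW_DAYS[severity])

    # e.g. a critical bug reported today goes public two weeks from now unless
    # the vendor ships a fix (or commits to a credible final-fix date) first.
    print(disclosure_deadline(date.today(), "critical"))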
Re: (Score:1)
Oh, sorry. Just had a flashback to those boy scout days when they would hand out those little patches, I mean badges, BADGES, for being able to set things on fire and tie up your little brother in the name of first aid.
Re: (Score:2)
I used to tie up things and set my brother on fire.
Monkeyboi