Full-Disclosure Wins Again 122
twistedmoney99 writes "The full-disclosure debate is a polarizing one. However, no one can argue that disclosing a vulnerability publicly often results in a patch — and InformIT just proved it again. In March, Seth Fogie found numerous bugs in EZPhotoSales and reported them to the vendor, but nothing was done. In August the problem was posted to Bugtraq, which pointed to a descriptive article outlining numerous bugs in the software — and guess what happens? Several days later a patch appears. Coincidence? Probably not, considering the vendor stated "..I'm not sure we could fix it all anyway without a rewrite." Looks like they could fix it, but just needed a little full-disclosure motivation."
A bug only exists... (Score:4, Insightful)
Re: (Score:3, Insightful)
Incorrect. A bug exists if a bug exists. A bug only gets fixed if the public knows about it, specifically the computer savvy segment of the population, since the average user can't tell a bug from a feature.
Re: (Score:2)
I thought wrong.
Re:A bug only exists... (Score:5, Insightful)
Sadly, we live in a world where most people in power actually believe that anyone who points out problems is just as bad as someone who causes and exploits problems.
Re: (Score:3, Insightful)
Anyone who points out problems is creating a problem.
A lot of times, if you don't officially know about it, you don't have to officially do anything about it.
Re: (Score:2, Insightful)
At least he went to the company first and sat on it for a while. Lots of people publish first, then notify the maker. That definitely makes him a white hat in my book.
Re: (Score:2)
This all comes down to responsibility, and the simple fact that people got into positions of power by avoiding taking any responsibility for things that have gone wrong. All the people at the top of most governments and organisations are masters at avoiding responsibility.
Re:A bug only exists... (Score:5, Funny)
GIRL: But--
NARRATOR: Once and for all!
Re: (Score:1, Funny)
IMHO, knowing about specific software flaws is an advantage to everyone but the company that makes the software with the flaw. The people in power only "think" the way you describe because they get their power from the same companies that lose out when someone finds a flaw in that company's software.
Hand a policeman a $20 bill and help you ar
Re:A bug only exists... (Score:5, Funny)
He fell into the sarchasm.
Re: (Score:1)
Re: (Score:2)
Sure, but the patch buys them time so that they can fix the actual hole. Usually a hole is a sign of a bigger problem, and certainly any developer would want to re-write vulnerable sections to close the hole up permanently. Of course the other issue is with the development cycle; if you're coming out with a new version of the software, do you really want to invest that much time in re-writing the old code to eliminate the bug? Probably not. You'd want to patch the hole and then make sure it did not recur in
Re: (Score:2, Funny)
does it have to be turned into law? (Score:5, Insightful)
Why then, should software be any different? Do we have to force companies to take action once a bug is submitted to them?
B.
The difference (Score:4, Insightful)
Re: (Score:3, Interesting)
B.
Re: (Score:2)
I don't thi
Re: (Score:2)
Many software products contain many bugs. With so many lines of source code, you can be almost certain there are bugs in there, especially when languages like C++ are used to create relatively undemanding applications. Many of these bugs will never show up; if they did, they would have been discovered during testing. And if they do show up, they may not do much harm. For example, a memory leak or buffer overflow in a graphics application won't matter too much.
Yeah, that's the reason the highly important security-bugs do not exist: if they were important, they would have surfaced during the testing phase.
Re: (Score:3, Insightful)
Buffer overflows and such tend to not surface in normal applications, because you would have to go out of your w
Re: (Score:2)
That means for software companies, that they should put more manpower on coding beyond the outer shell. Well thought-out interior functions are just as importa
Re: (Score:2)
Re: (Score:1, Interesting)
Software failing isn't necessarily harmless (Score:5, Insightful)
It's possible to be injured in ways other than just physically. What about fraud and identity theft? It could be very damaging to thousands of people if one of the software applications that your company is using has flaws that allow fraud or identity theft to occur on a massive scale.
To quote "Going Postal" by Terry Pratchett: "You have stolen, embezzled, defrauded, and swindled without discrimination. You have ruined businesses and destroyed jobs. When banks fail, it is seldom bankers who starve. In a myriad small ways you have hastened the deaths of many. You do not know them, but you snatched bread from their mouths and tore clothes from their backs."
There's a reason why fraud and theft can have as harsh a punishment as assault. (In Canada, at least.)
Maybe EZPhoto Editor isn't going to put anyone at great risk if it fails, but I'm sure you could think of some software that might.
Re: (Score:2)
*I'm not implying you live in your car, just for the sake of argument.
Re: (Score:2)
Certainly that would qualify as a life at risk.
Re: (Score:3, Insightful)
Re:does it have to be turned into law? (Score:4, Insightful)
why drag lawyers and the government into this? (Score:1, Interesting)
a) change was needed
b) public was unaware
c) individual wanted change
d) individual alerted a portion of the public
e) change was made.
No lawyers, no State, no violation of freedoms, no taxes, no fines.
Call me sceptical (Score:3, Interesting)
If so, that is some AMAZING response time. But I would venture a guess that they had already been working on the corrections. The public posting may have made
Adopt the cryptographer threat model (Score:5, Insightful)
In the threat-models used by cryptographers, the attacker is assumed to know everything except cryptographic keys and other pre-defined secrets. These secrets are small in number and small in size. Their size and their limited distribution means we can trust protocols based on these secrets.
Software that is used by millions of people is the very antonym of a secret. Compiled source is routinely reverse engineered by black hats. Web sites are routinely attacked using vectors such as SQL injection. In short, you can't assume that any of the source code is secret. Taken to its logical conclusion, you must therefore assume the worst: that the black-hats know of far more bugs than you do. In fact, strictly speaking you assume they know every bug that exists in your software.
In light of adopting such a severe threat-model, the argument over full disclosure is a non-debate. Black-hats with sufficient resources probably already know of the bug. The only people aided by disclosing it widely and publicly are the people who run the software, who can take evasive action. In contrast, you only told black-hats what they already know.
Simon
Re: (Score:1)
Re:Adopt the cryptographer threat model (Score:5, Insightful)
But that's a ridiculous assumption! It makes sense in the context of cryptography research, but you're turning it into an assertion that publicizing software vulnerabilities doesn't have any negative consequences, which is absurd. There *are* two genuine conflicting sides here and you can't just wave one of them away.
Re:Adopt the cryptographer threat model (Score:5, Insightful)
It's a ridiculous assumption until you try to work out how you can usefully weaken the assumption! Ask yourself this, how do you know how good the attacker is? They're not going to share their successes with you, in fact, they will probably never make contact with you.
You are only as strong as your weakest link, but with the vast distribution that's possible these days you have to expect to be up against the very best attackers. So what then is the plausible attacker you're meant to be up against?
Incidentally, this is why cryptographers choose such a harsh threat-model in which to place their protocols and ciphers. Only by designing against an attacker who is effectively omniscient can you truly get security. You need to look no further than Diebold to see what happens when you don't do this.
Sure in the real world, disclosing vulnerabilities has an impact! Of course it does, but to say it decreases the security of the users of the software is simply nonsense. It may well do in the very short term, but in the longer term it is absolutely vital that full disclosure occurs if security is to improve.
Simon
Re: (Score:2)
Yes, that'd be the entire point! When you're talking about the field of cryptography research that calculation is obvious. But users of software can't be expected to put up with increased vulnerability
Re: (Score:1)
How many? (Score:3, Interesting)
I can count at least 3, and I wouldn't be surprised if there aren't a lot more. Between only telling the company about a discovered security flaw and immediately announcing it to the entire world is a whole range of possibilities. To name a few:
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
somewhat reasonable post I came across who could actually see that it's possible to have some middle ground with the whole full disclosure issue.
Maybe I should have just posted "You must be new here" and basked in the +5 funny
Re: (Score:2)
Re: (Score:3, Interesting)
That's a bad example. (Score:2)
For example, read up on the ongoing attacks on AACS. The black hats (and yes, they are black hats) working on breaking AACS have exploited all kinds of software and hardware bugs and shortcomings in order to gather more information and cryptographic secrets. They have the upper hand because they are not fully disclosing their work. If they were to fully disclose the bugs in various tabletop HD-DVD players and software tools that they use to garner keys, you can bet that the problems would be fixed. As is, though, they are still ahead of the AACSLA.
I'm not sure I'd go so far as to say that. DRM is a poor example for any security model, because there's no real security there, just obscurity. In the long term, it doesn't really matter what the hackers release, because there's no long-term way for the AACSLA to stop them (well, aside from putting them all in jail, which is doubtless what they'd love to do). You can't give someone both enciphered information, and the key to the cipher, and expect them to not be able to combine the two -- that's exactly w
Re: (Score:1)
Re: (Score:2)
I can't remember if I turned the stove off when I left for work this morning -- I'd better call my neighbor and ask him to set my house on fire!
Require login, forbid any subdirectory access. (Score:5, Interesting)
Here's how I've solved this problem:
1) Modify the htaccess (or even better, the httpd.conf) files, so that ANY access to any of the subdirectories of the main app is forbidden. The only exceptions are: a) submodule directories, whose php files do a login check, or b) common images (e.g. logos)
2) The only way to view your files is through the web application's PHP file lister and downloader. This should be child's play for anyone with PHP knowledge: PHP has the fpassthru function, or if you're memory-savvy, use standard fopen. Make sure the lister doesn't accept directories above the ones you want to list, and for the files use the basename() function to strip any subdirectory components from them.
3) Any file in the PHP application MUST include() your security file (which checks if the user has logged in and redirects them to the login page otherwise). For publicly-available pages, add an anonymous user by default.
4) For log in (if not for the whole app), require https.
4a) If you can't implement https, use a salt-based login, with SHA-256 or at least MD5 for the password hashing.
5) Put the client's IP in the session variables, so that any access to the session from a different IP gets redirected to the login page (with a different session id, of course).
6) After log in, regenerate the session id.
7) Put ALL the session variables in the SESSION array, don't use cookies for ANYTHING ELSE.
I consider these measures to be the minimum standard for web applications. It shocks me that commonly used apps still fail to implement them properly.
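Since the parent lays the scheme out step by step but shows no code, here is a minimal PHP sketch of most of those points (the htaccess/httpd.conf deny rules of point 1 and the https requirement of point 4 are assumed to be handled in the server configuration). The file names, the photo directory, and the fetch_user() lookup are hypothetical stand-ins, not anyone's actual application code.

    <?php
    // auth_check.php -- hypothetical name for the "security file" every page include()s (point 3).
    // Assumes all state lives in $_SESSION (point 7) and that sessions use cookies only.
    session_start();

    // Point 5: tie the session to the client's IP; on a mismatch, wipe it and force a
    // re-login under a fresh session id.
    if (isset($_SESSION['ip']) && $_SESSION['ip'] !== $_SERVER['REMOTE_ADDR']) {
        $_SESSION = array();
        session_regenerate_id(true);
        header('Location: login.php');
        exit;
    }

    // No logged-in (or anonymous) user means no page.
    if (empty($_SESSION['user'])) {
        header('Location: login.php');
        exit;
    }
    ?>

    <?php
    // login.php (fragment) -- point 4a: salted SHA-256 check; point 6: regenerate the id on login.
    session_start();
    $user = isset($_POST['user']) ? $_POST['user'] : '';
    $pass = isset($_POST['pass']) ? $_POST['pass'] : '';

    $row = fetch_user($user);   // hypothetical DB lookup returning the stored salt and hash
    if ($row && hash('sha256', $row['salt'] . $pass) === $row['pass_hash']) {
        session_regenerate_id(true);                 // drop any pre-login session id
        $_SESSION['user'] = $user;                   // everything goes in $_SESSION, nothing else in cookies
        $_SESSION['ip']   = $_SERVER['REMOTE_ADDR'];
        header('Location: index.php');
        exit;
    }
    ?>

    <?php
    // download.php -- point 2: the only way to reach a stored file is through this script.
    require 'auth_check.php';

    $base = '/var/www/app/photos';                                // assumed storage directory
    $file = isset($_GET['file']) ? basename($_GET['file']) : '';  // basename() strips any ../ or subdir tricks
    $path = $base . '/' . $file;

    if ($file === '' || !is_file($path)) {
        header('HTTP/1.0 404 Not Found');
        exit;
    }

    header('Content-Type: application/octet-stream');
    header('Content-Length: ' . filesize($path));
    header('Content-Disposition: attachment; filename="' . $file . '"');
    $fp = fopen($path, 'rb');
    fpassthru($fp);                                               // stream the file without buffering it all
    fclose($fp);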
Re: (Score:2)
Locking to IP address is a non-starter because there are ISPs who will rotate their visible IP range dynamically, so that user A might appear to be coming from IP X on one request, and from IP Y on the subsequent request. Then that user's screwed.
Re: (Score:3, Informative)
Re: (Score:2)
I wonder about using unique keys for each pass. That is, during logon, the client and server pass random numbers back and forth. Those are used to seed additional random number generation. If your request doesn't match up with the server's last response, then your session has been hijacked and should be cancelled/redirected.
If the thief manages to sneak in with a valid set of numbers, then your next requ
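For illustration only, the parent's rotating-number idea might look roughly like this in PHP; the token handling and names are invented for the sketch, and as the reply below points out, it largely re-implements what SSL already provides.

    <?php
    // Hypothetical per-request token rotation: every response carries a fresh random token,
    // and the next request must echo it back or the session is treated as hijacked.
    session_start();

    if (isset($_SESSION['token'])) {
        $sent = isset($_REQUEST['token']) ? $_REQUEST['token'] : '';
        if (!hash_equals($_SESSION['token'], $sent)) {   // hash_equals() needs PHP 5.6+
            $_SESSION = array();
            session_regenerate_id(true);
            header('Location: login.php');               // treat the mismatch as a possible hijack
            exit;
        }
    }

    // Issue the token for the next request; embed it in every link and form on the page.
    $_SESSION['token'] = bin2hex(random_bytes(16));      // random_bytes() needs PHP 7+
    echo '<input type="hidden" name="token" value="' . $_SESSION['token'] . '">';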
Re: (Score:2)
Uh, you essentially just described SSL - except SSL is already a lot more thorough than this. About the only attack it is vulnerable to is man-in-the-middle, which isn't an issue with good CAs. Your session definitely isn't getting hijacked with SSL.
Re: (Score:2)
AOL doesn't count
Anyway, an option would be that at login, the user has the option to set a flag like "my ISP changes my IP randomly" (something like the login screens with the option "This is a shared computer"). Best of both worlds
For intranet sites inside a company, this is a non-issue, since all computers have a fixed IP.
Re: (Score:2)
7) Put ALL the session variables in the SESSION array, don't use cookies for ANYTHING ELSE.
There're a bunch of things you can do with cookies and still be on the safe side. As long as it's only for read access and something that's only visible to the user, you could save the time of looking stuff up in the DB, e.g. a nickname or the current quota limit.
Also you can use mcrypt with Blowfish or AES for small chunks of data and store it in a cookie.
As long as you aren't storing too much data in cookies you can use the client as session-based data storage - it even scales directly with your clients -
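A rough sketch of that "small encrypted chunk in a cookie" idea, using openssl_encrypt in place of the mcrypt calls mentioned above (mcrypt has since been deprecated); the key derivation and cookie name here are placeholders only.

    <?php
    // Store a small, non-sensitive value client-side, encrypted with AES-256-CBC.
    // The key comes from a server-side secret and never reaches the client.
    $key   = hash('sha256', 'server-side-secret', true);   // 32-byte key from a placeholder secret
    $value = 'nickname=foo;quota=100MB';

    $iv = openssl_random_pseudo_bytes(16);
    $ct = openssl_encrypt($value, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
    setcookie('profile', base64_encode($iv . $ct), time() + 3600);

    // Reading it back on a later request:
    $raw   = base64_decode($_COOKIE['profile']);
    $iv    = substr($raw, 0, 16);
    $value = openssl_decrypt(substr($raw, 16), 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);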
Re: (Score:2)
This reasoning is whack.
you only told black-hats what they already know.
This is only true if your assumption that the black-hats know about all of your bugs is correct. Which is probably not the case. You are only assuming that they know all of the bugs as part of a threat model.
If a government said "The government of Evilland will try to get spies to find out our secrets. Let us assume that they have found out all of our secrets. The logical conclusion to this is that we should make all of our secrets available to all governments." you would think they were idiots.
Actually, that statement is stupid because you're just putting forward a fallacy. You should NOT make all your secrets available to all governments, but at the very least you should, as a government, take action based on the fact that the enemy knows everything about you.
It is the same for security and disclosure. You should assume that the blackhats know about the bugs (and in reality they do, not all blackhats but most probably at least one), then you can take decisions based on the maximum risk, not just a
Incentives (Score:5, Insightful)
Nothing like fear of losing sales and yearly bonus to motivate higher management.
Re: (Score:1, Insightful)
Depends What You Mean (Score:1)
Re: (Score:2)
It's debatable whether that's considered true, good-faith full disclosure. If you discover a security vulnerability, you are suddenly burdened with a moral imperative. You know that many people are in danger, but the people don't. Every day that you delay telling them is another day they're in danger, and quite possibly being exploited. It's important to remember that by denying people the knowledge of the insecurities in their systems, you
Re: (Score:2)
IMHO, the proper thing to do is to do a partial disclosure with selective full disclosure. Disclose all the details to the company. Disclose all details to CERT, who will issue a bulletin available only to certified vendors and will wait a period of time before public disclosure. Simultaneously post a publ
Re: (Score:2)
In a way, you're right. Full disclosure makes the cost to the company significant enough that it's a danger to the company's interests. This is as it should be - there's no reason to make selling insecure code less dangerous.
Everyone makes mistakes, including you.
Damn right, and when I make a mistake, I have to face the consequences. When I mess up, I do
Re: (Score:2)
Wow, that's twisting reality. Which would you rather have: a 2% chance of dying today or a 100% chance of dying? Because that's the choi
Re: (Score:1)
The Government (Score:2, Insightful)
Re: (Score:2)
Re: (Score:1)
In the examples you cited the people who have exposure to those systems are motivated to see them succeed. I imagine the space shuttle would be easy to break if malicious individuals had access to it.
If ALL of the users of voting machines were motivated to see them succeed- what we have would work wonderfully. Unfortunately finding solutions that other people can't break when they are trying hard is not so easy.
Of course there is
Re: (Score:2)
Re: (Score:2)
We give the electronic voting makers a lot of crap for making insecure systems, and rightly so- knowing they're insecure, they shouldn't put them on the market to be used in something so important. But it's easy to forget that it really is a hard problem. The fact
Re: (Score:2)
Re: (Score:2)
It really isn't as trivial as
Re: (Score:2)
I know that the human element always exists, but it is drastically reduced if the person voting A. sees the card coming out of a sealed package into the machine and B. never actually touche
Re: (Score:1, Troll)
But the current administration has held all their policy meetings in secrecy and has failed to provide disclosure of details of its inner workings to Congress even in numerous private sessions due to "executive privilege". Are you calling our great leader a slimy bastard?
We don't even (Score:2)
Welcome back to the USSA!
Two basic problems (Score:3, Interesting)
They are just left in their ignorance with the potential for being exploited.
The "I want to do bad things" community has the information and is paying attention. Their community gets advance information before there even is a fix and they get to evaluate if it is worth their efforts to exploit it.
The other group that gets to benefit from full disclosure is the media. Starved for news of any sort, bad news is certainly good news for them.
All in all, full disclosure is simply blackmail. Unfortunately, no matter what the result is the user of the product affected gets all of the negative attributes. Their instance of the product isn't fixed because unless they are paying attention they don't know. They get to lose support if the company decides to pull the product rather than kneel to the blackmail. If the bug is exploited the end user get to suffer the consequences.
You might think this would justify eliminating exclusions for damages for software products. There isn't any way this would fly in the US because while we like to think we're as consumer-friendly as the next country, the truth is this would expose everyone to unlimited liability for user errors. Certainly unlimited litigation, even if it was finally shown to be a user error, which is by no means certain. And do not believe for a moment that you could somehow exclude software given away for free from damages. If you have an exclusion for that you would find all software being free - except it would be costly to connect to the required server for a subscription or something like that. Excluding free software would be a loophole that you could drive a truck through.
Re:Two basic problems (Score:5, Insightful)
First of all, if the users of the software aren't paying attention, whose fault is that?
Secondly, you would think and hope that the software manufacturers would be paying attention and that they would inform their users, who may or may not be paying attention.
Full disclosure doesn't just imply disclosure to a small, specific group of people. It involves making information PUBLICLY available to EVERYONE. If someone isn't paying attention then that's their own fault. But if you don't feel like end users who are too busy with other things to be paying attention to Bugtraq are getting a fair break, then point the finger at the software manufacturer instead. After all, they're the ones who sold faulty software and they're often the ones who continue to sell faulty software when bugs are not disclosed to the public, because they take the mindset of "what they don't know can't hurt them".
Unfortunately, what "they" don't know CAN hurt them. Because those same people you were talking about who are "interested in doing harm" are usually the ones to find the bugs to begin with. So they already know and those end users that you are so adamant about protecting are already at risk.
So IMO it's the responsibility of the software manufacturers to pay attention, fix bugs, release patches and inform their users that they need to apply said patches ASAP.
I mean, are you really advocating keeping information from people? What if you had cancer, would you prefer that your doctor not inform you? As I already stated, full disclosure is all about making information publicly available to absolutely everyone, so that absolutely everyone can make whatever choices they feel like with that information. Your argument is that full disclosure is selective about who it makes the information available to. I have to disagree. At the very least it makes the information available to the developers who made the buggy software to begin with, and to competent admins who follow those lists so they know what kind of bugs are running on their servers (I used to be one of those).
Yep yep (Score:1)
Re: (Score:2)
Full disclosure results in announcing a bug not to the world, but only to people that are paying attention.
Yes, but the group that is paying attention includes the people with the greatest need to maintain security.
The "I want to do bad things" community has the information and is paying attention. Their community gets advance information before there even is a fix and they get to evaluate if it is worth their efforts to exploit it.
True, although sometimes this community already knows some of it.
The other group that gets to benefit from full disclosure is the media. Starved for news of any sort, bad news is certainly good news for them.
This is a good thing. First it informs the people. Second, it gives people a bad impression of vendors who have security holes and encourages them to move to more secure vendors. That's the free market improving security.
All in all, full disclosure is simply blackmail.
Nope. Offering to not release the vulnerability for cash is blackmail.
Unfortunately, no matter what the result is the user of the product affected gets all of the negative attributes.
Look, I'm a huge advocate of resp
Re: (Score:2)
Sorry, but you're simply wrong. Most full disclosure mailing lists are open to the public, therefore there are black hats reading those lists. The fact that the majority of people are legitimate researchers is not relevant. It only takes one black hat to put a nail in the coffin of full disclosure. Disclosing it on those lists is like posting a paper on the Internet about how to make a "cool" new kind of explosive that can't be detected by the airlines. Sure, the security researchers can find out ways
no, this is responsible disclosure (Score:5, Insightful)
Responsible disclosure allows responsible companies to get a fix before a flaw is used maliciously, but the researchers still get credit. With responsible disclosure everyone wins except black hats.
Full disclosure benefits black hats more than it does anyone else.
Re: (Score:2)
I agree with you for the most part, but... (Score:2)
- the timeline for full disclosure being given to the vendor (I don't know whether that did or didn't happen in this case), and
- reaching some mutual or community agreement on what a "reasonable amount of time to fix the problem" is for the problem in question.
That said, I definitely agree this wasn't "full disclosure", since the vendor was informed, but it wasn't necessarily responsible disclosure, either. To me, "responsible disclosure" implies that a patch is
Re: (Score:3, Informative)
No. Wrong. It's not a matter of opinion. With responsible disclosure, a security researcher notifies a vendor before publishing his research. It absolutely DOES NOT imply that a patch is made available before the researcher publishes his findings. A vendor is still free to shoot itself in the foot under responsible disclosure.
The only gray area is determining just how much time i
Re: (Score:2, Insightful)
Re: (Score:2)
I didn't say it implied that; I said, "To me, "responsible disclosure" implies that a patch is made available BEFORE the detailed disclosure of the vulnerability happens". And it is a
Re: (Score:2)
This is a contradiction. The phrase "to me" prepended to a factual predicate does not change the meaning of the statement. If you aren't a native English speaker, and I am misunderstanding what you mean to say, I apologize.
It is every vendor's dream to have security researchers work as free consultants, hand-holding them through fixing security problems. The reality is that researchers are under no obligation to do anything o
Re: (Score:2)
No, it is not. It means that is what is implied by "responsible disclosure" to me, which is exactly what I said. That isn't what it necessarily means to anyone, or that I think that's what it should mean to anyone, and I understand that.
That is what responsible disclosure means to me, and that is a valid viewpoint; it most certainly is a matter of opinion.
Tied to that, obviously, is the
Re: (Score:2)
The word "responsible" refers ENTIRELY to the researcher, not to the vendor. Any definition of full disclosure which depends on whether or not a vendor choses to act is therefore an invalid definition.
In your own cursory examination of articles and blogs, what term did you find the industry uses for disclosures in which the researcher gave a company advance notice of a publication, but not as much lead time as some wou
Re: (Score:2)
Only because "two plus two equals four" is a provably correct factual statement.
What constitutes responsible disclosure, in the context in which I was speaking, is a matter of opinion.
Therefore, it is perfectly reasonable that someone might say "To me, [X] means [Y]".
The word "responsible" refe
Re:no, this is responsible disclosure (Score:4, Insightful)
Black hats win too. You ask 4cId_K1LL3R whether he'd like you to "fully" or "responsibly" disclose the 0day buffer overflow that he discovered a week ago and has been using to break into systems. I'm sure he'd far prefer that you keep the public in the dark about the issue for a month or so while the company leisurely gets around to patching it.
Black hats win, but software companies win most of all - which, after all is why software companies invented and promoted "responsible disclosure" in the first place. "Responsible" disclosure allows a company to improve their reputation and their software at little to no cost, thanks to volunteers who fix their security problems without telling the public. This, in turn, enables them to continue using the same irresponsible software engineering practices as they always have, with no impact on their bottom line.
Re: (Score:2)
Additionally, most companies can't immediately implement work-arounds on the day of a 0-day publication. They have to wait until a patch is released from a vendor. In such cases, the black hat has the same amount of time to
Re: (Score:1)
Compiling and testing doesn't find everything. It's easy to accuse an ace coder or a crack team of programmers of sloppiness when you don't know the people. Sure some companies push an overly aggressive time frame, but not all of them do and (from what I can tell) not most of them
Re: (Score:2)
There's always an easy, immediate work around. Sometimes it's as trivial as adding a firewall or IDS rule, sometimes it's as extreme as physically unplugging the affected machines until the issue is patched.
If you're an institution with servers containing a lot of highly sensitive information, you'll probably be willing to do extreme things to protect your data if it's really, truly necessary. The problem is, you
The Beatles said it best - (Score:2)
How Software Works (Score:5, Funny)
2. Secretly, a team of crack programmers (or programmers on crack) develop the patch.
3. The patch sits in a repository until public outcry.
4. Public outcry.
5. Patch released... LOOK HOW FAST WE ARE!
Is it just me or (Score:1, Insightful)
Patchy code? (Score:2)
They might not have been lying. Fixing it properly might have required a rewrite, and instead they may have been forced to include a number of slapped-together kludges with Lord-knows-what side-effects under extreme time pressure. I know what kind of code *I* write when I'm under that kind of time constrai
False assumptions? (Score:3, Interesting)
Ironically, the full disclosure probably forced them to put out the solution before it was ready, leaving the risk of new bugs. IMHO, forcing a company to rush a fix is not the answer. If you work for a real software company, you know that today's commercial software often has thousands of bugs lurking, although many are very edge case and are often more dangerous to fix than not fix (esp if there is a workaround).
There should be enough time given to a company to address the issue. Some can argue whether or not 5 months is enough time, but that's a different argument. I think forcing companies to constantly drop everything for threat of full disclosure will end up doing more harm than good.
Re: (Score:3, Insightful)
If the company was indeed looking at the problem, then they lied about it. Their response to being notified of the problems, as described in the article, was to say "Gee, we're not going to bother fixing that. Instead we're going to work on a new product and just sell it as an upgrade to everybody."
When someone tells you flat out that they aren't going to do anything, why is assuming that they aren't doing anything false?
Re: (Score:2)
There seem to be some false assumptions here. It is assumed the company did not look at the bug and potential fixes until after it was "fully disclosed".
I don't think you RTFA.
He told them about the problems.
Their response: We're not fixing it because we have a new client coming up.
5 months later, no new client, so he went public.
If you read the other article linked in the summary, it seems like they could have trivially done a lot to secure things server side. Like not making the password hash file readable and not allowing user uploaded scripts to run on the server.
Re: (Score:3, Interesting)
I didn't see the original email he sent to the company. Nor did I see mention of followups to try and push
Several days later a patch appears. (Score:4, Insightful)
"Coincidence? Probably not considering..."
Yeah, everyone knows that patching security holes is an instant process. What other explanation could there possibly be? The public found out about the bugs, and the vendor waved a magic wand, and presto-changeo, they were fixed.
Okay, now let's be real here.
That the patch appeared almost immediately afterwards is the surest sign that the vendor was already working on the fixes. It probably also indicates the vendor wasn't confident that they were finished, and rushed to get them out after only a couple of days of public disclosure.
So enjoy your half-baked patch.
True economics (Score:1, Interesting)
The thinking seems to be something like this: when a bug is disclosed, blackhats that were unaware of the bug become informed and have a window of exploitation until the bug is patched.
This seems absurd to me. As soon as the bug is disclosed, users become aware and can immediately block the vulnerability. If there is no other solution, they could at least stop using the vulnerable software. So the window of exploitation is the
morality (Score:2, Funny)