Should Vendors Close All Security Holes?
johnmeister writes to tell us that InfoWorld's Roger Grimes is finding it hard to completely discount a reader's argument for patching low- or medium-risk security bugs only when they are publicly discovered. "The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complaint is made. Before you start disagreeing with this policy, hear out the rest of his argument. 'Our company spends significantly to root out security issues,' says the reader. 'We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don't patch the problem.'"
i didn't rtfa (Score:5, Insightful)
Re: (Score:3, Interesting)
Re: (Score:3, Insightful)
If they could release security patches invisibly, they probably would. Unfortunately, there's no way to do that.
Re: (Score:2)
Re: (Score:2)
The summary missed those parts. (Score:4, Insightful)
#1. If we spend time fixing those bugs, we won't have as much time to fix the important bugs.
Translation: we put in so many bugs that we don't have time to fix them all.
#2. We give priority to any bugs that someone's leaked to the press.
Translation: we only fix it if you force us to.
#3. "Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software."
I had to post that verbatim. They're releasing new bugs in their patches.
#4. "Fourth, every disclosed bug increases the pace of battle against the hackers."
Yeah, that one too. The more bugs they fix, the faster the
#5. If we don't announce it, they won't know about it.
Great. So your customers are at risk, but don't know it.
Re: (Score:2)
Are you incapable of thinking reasonably, or do you just like pointing fingers?
Re:The summary missed those parts. (Score:5, Insightful)
You missed the point about "patches"? (Score:2)
No one is arguing that.
... or not.
The discussion is about whether the attempt should be made to address ALL of those
Where did I say that?
They SHOULD be prioritized. No sense in trying to patch a local user, non-exploitable crash bug when you have a remote root vulnerability (with exploit).
But the system is
Re: (Score:3, Informative)
Re:The summary missed those parts. (Score:5, Insightful)
Now I'm off to read the article and see if my theories match up with their logic...
Re:The summary missed those parts. (Score:4, Insightful)
Trivial and meaningless statement. There is good code and bad code. Good code is code with fewer bugs. Bad code is code with many bugs. A good developer is one who designs the code to avoid bugs, and who, more importantly, fixes the bugs they find. A bad developer uses the above truism as an excuse to avoid fixing their shitty code.
Why do you have a problem with assigning priorities to issues that need fixing?
When one of those priorities is "don't fix until our customers find out, and try to keep them from finding out" then I have a problem with it.
The only thing that should distinguish a high priority bug from a low priority bug is: Do we fix it, then release the patch as an urgent hotfix? Or do we fix it, then release the patch as part of a periodic security update so that we have more time to test and so sysadmins aren't overwhelmed having to apply and test patches all the time? There is no priority that should read "Do not fix, unless we get bad P.R. for it."
The only developer who would do such a thing is a bad developer who is okay with leaving their customers exposed. Of course the reason they got into that situation, of having so many security issues that they can't afford to fix them all, is due to them being bad developers.
Are you incapable of thinking reasonably, or do you just like pointing fingers?
You need to drag your brain out of its pie-in-the-sky abstract concepts like "do you have a problem with priorities" and start actually thinking about the situation before you start saying things like this.
Re:The summary missed those parts. (Score:5, Informative)
I had to post that verbatim. They're releasing new bugs in their patches.
Partially true. By doing a bindiff between the old binaries and new binaries, they can see things like "Interesting, they're now using strncmp instead of strcmp. Let's see what happens when I pass in a non-null-terminated buffer..." or "they're now checking to make sure that parameter isn't null" or whatever.
The defects were there before, but the patches give hackers a pointer that basically says "Look here, this was a security hole." Since end users are (or were) really bad about patching their systems in a sane time frame, this gives the hackers a window of opportunity to exploit the hole before everyone patches up.
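To make the bindiff point concrete, here is a minimal, hypothetical C sketch (the function names and the 16-byte token are invented for illustration) of the kind of change a patch can reveal: the pre-patch code trusts the caller to pass a null-terminated buffer, the patched code adds a bound, and the diff between the two binaries points a reverse engineer straight at the unpatched hole.

    #include <stdio.h>
    #include <string.h>

    #define TOKEN_LEN 16

    /* Pre-patch: assumes 'input' is null-terminated. A buffer without a
     * terminator makes strcmp read past the end -- the original hole. */
    int check_token_old(const char *input, const char *secret)
    {
        return strcmp(input, secret) == 0;
    }

    /* Post-patch: explicit length check plus a bounded comparison.
     * Diffing the two binaries highlights exactly this function and,
     * with it, the weakness still present in unpatched installs. */
    int check_token_new(const char *input, size_t input_len, const char *secret)
    {
        if (input == NULL || input_len != TOKEN_LEN)
            return 0;
        return strncmp(input, secret, TOKEN_LEN) == 0;
    }

    int main(void)
    {
        /* Terminated here for a safe demo; a non-terminated buffer would
         * make the old version overread. */
        char buf[TOKEN_LEN] = "not-the-secret!";
        printf("old: %d\n", check_token_old(buf, "sixteen-byte-key"));
        printf("new: %d\n", check_token_new(buf, sizeof buf, "sixteen-byte-key"));
        return 0;
    }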
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
So, of course, being a regular patcher, I'd prefer that they patch th
Re:The summary missed those parts. (Score:4, Insightful)
No, they are fixing old bugs. Old but unknown bugs, which now become known to hackers, who can go and abuse the vulnerabilities wherever they didn't get patched yet. It's pretty old news, really.
Re:The summary missed those parts. (Score:4, Informative)
That's not how I read the response, not that how I read it is better.
What I got from reading the entire paragraph about that was that they patch the exact issue found, but do a terrible job of making sure that the same or similar bugs are not in other similar or related parts of the code. Hackers then see the bug-fix report and go look for similar exploits abusing the same bug in other parts of the program. These new exploits would not be found if they hadn't fixed and published the first one.
This is not any better than causing new security issues with their security patches, but let's at least bash them for the right thing.
Yeah, I can see that. (Score:2)
Re: (Score:2)
Re:The summary missed those parts. (Score:4, Interesting)
Ideally the stuff should be reasonably secure out the gate; sure, they're talking about all the reasons they have for not patching after the fact, and all this stuff is true...Patching is a huge pain in the ass for everyone involved. But dammit, the amount of patching that gets done is inexcusable!
The thing that burns me is, you know that the developers don't incorporate those "tentative" fixes into the next product release either, not until the bugs make it public. You know that there is middle management who is perfectly aware of significant poor design decisions that could be solved by a well-planned rewrite, who instead tell their people to patch and baby weak code, because the cost of doing it right would impact their deliverables.
Re: (Score:2)
Re: (Score:3, Interesting)
Point #5 is interesting in that this guy ASSUMES that because a bug hasn't been made public, the crackers don't know anything about it. It is just as reasonable to assume that if even ONE person found the bug, that person could be a cracker. Most folks don't look for vulnerabilities, but crackers do.
Microsoft may know about a vulnerability for months, or longer, before it issues a patch AND the announcement on the same day, IF it ever does. Not all holes are found by researchers. Meanwh
Should Vendors Close All Security Holes? (Score:2, Insightful)
Re: (Score:3)
If it's a problem that people know about and could be serious, then I think it should definitely be fixed ASAP.
Re:Should Vendors Close All Security Holes? (Score:5, Insightful)
Closing all vulnerabilities is not practical. In any sufficiently complex piece of software, there will be bugs and security holes. Obviously, you need to close the nasty ones, but many of these exploits are not particularly high risk. In these cases, especially if the fix would involve a major redesign or other highly disruptive solution, it may be best to just leave them alone.
If, for example, the underlying design of your product allows for a minor, difficult to exploit security hole, it is probably not worth it to spend the time and money to redesign the product. More likely, your choices would be either a.) live with the (small) vulnerability, or b.) scrap the product entirely.
The decision to close a security hole should be dependent on the potential impact of the hole, the urgency of the issue (are there already exploits in the wild, for example), and how many resources (time and money) it will take to fix it.
Re: (Score:2)
Once you have developed a fix, it is completely unethical to wait indefinitely to release the fix. The longest acceptable wait is until the end of the code audit to look for similar holes in other parts of the code base. This should only take a few months.
Hear Hear (Score:2)
Having worked on commercial software for a few years now, I'd have to agree with the parent. All complex programs come with bu
Re: (Score:2)
I hate to say this, but if your software is so complex that it is impossible to fix all the security holes... Then maybe you shouldn't make it so complex.
I mean even something as complex as OS X has security holes, but not so many that it requires the developers to throw their hands in the air and say "Oh we give up!" at some point.
Seriously, if your product is so complex and po
Re: (Score:2, Insightful)
I don't think you understand that. (Score:2)
And that explains the massive growth of Linux and other Open Source apps.
It's "bullshit" to you, but it's very real to the customer. And the customers are moving because of it.
Re: (Score:2, Insightful)
NO
ACTIVEX IS A FEATURE!
Two words: Exploit Chaining (Score:5, Interesting)
Re:Two words: Exploit Chaining Be... (Score:2)
In code space, no one can hear you scream...
This is a GOOD case of "chain" smoking...
Maybe they need to debug their approach? Is anyone else bugged by this?
Waste of time and money (Score:3, Insightful)
Re: (Score:3, Informative)
Patch and next version are different things. They fix the hole but don't release a patch. The fix is released in the next version.
Words Important (Score:2)
They come up with a tentative solution, but they don't spend the time to do full testing on it unless it's a critical security hole. Why? As mentioned, they prefer to:
1) Use limited resources to focus on finding critical problems
2) Not introduce new code to a known system unless necessary
3) Not put their real-world, paying-customers-who-don't/can't-patch in danger.
Again, the letter makes clear
Re: (Score:2)
1. Poor quality of coding tends to concentrate around a particular feature because they're all written by the same person or team. Security patches indicate poor quality of code.
2. Security patches provide some insight for reverse engineering, so this allow someone to find more vulnerabilities around that p
Their arguments: 1-5 (Score:5, Informative)
1. It is better to focus resources on high risk security bugs.
2. We get rated better (internally and externally) for fixing publicly known problems.
3. Hackers usually find additional bugs by examining patches to existing bugs, so a patch could expose more bugs than fixes are available for.
4. If we disclose a bug and fix it, it just escalates the "arms race" with the hackers. Better to keep it hidden.
5. Not all customers immediately patch. So by announcing a patch for a bug previously unknown to the public, we actually exponentially increase the chances of that bug being exploited by hackers.
Re: (Score:2)
6 of one, half a dozen of the other (Score:3, Insightful)
Bugs should be fixed (Score:5, Interesting)
Fixing bugs can also expose other bugs (Score:2)
Just as often the reverse applies. A bug often shadows other bugs. Take away the main bug and there's just another right behind it which might even be worse. This is why you don't just "shoot from the hip" when fixing bugs.
Procrastination? (Score:2)
Seems like a small patch wouldn't be that much trouble and would avoid much larger problems...
Re:Procrastination? (Score:5, Insightful)
Now, that's bugs, which is a wider category than security holes. So, suppose that instead of crashing, it very rarely and briefly enters a state in which a particular sequence of bytes sent to it via the net can cause it to execute arbitrary code. Furthermore, suppose the program should never be running as root, so the arbitrary code is nearly useless. This is a low risk security hole, and probably not worth patching.
Could take hundreds of man-hours to find the cause, and perhaps even longer to fix. Probability of ever seeing this exploited is very very low. Should it then be patched?
Re: (Score:2)
Could take hundreds of man-hours to find the cause, and perhaps even longer to fix. Probabi
Re: (Score:3, Informative)
You sound like a person who needs a good debugger. Take gdb for example. You can ask your customer to send in the core dump file, which the program produces during the crash, then you load this core dump into gdb and not only will you get the exact location of the crash, you can also check where it was called and what values each va
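As a hedged illustration of that core-dump workflow (the program, the bug, and the session are all contrived for this example), here is a tiny C crasher with the sort of gdb commands one would run against its core file in the trailing comment:

    /* crashy.c -- deliberately broken, used only to illustrate the
     * core-dump debugging workflow described above. */
    #include <stdio.h>

    static int parse_value(const char *s)
    {
        /* Bug: no NULL check; dereferencing s crashes when it is NULL. */
        return s[0] - '0';
    }

    int main(int argc, char **argv)
    {
        const char *arg = (argc > 1) ? argv[1] : NULL;
        printf("value = %d\n", parse_value(arg));  /* segfaults when arg is NULL */
        return 0;
    }

    /* Typical session (commands are approximate and platform-dependent):
     *
     *   $ gcc -g -o crashy crashy.c
     *   $ ulimit -c unlimited        # allow core files to be written
     *   $ ./crashy                   # crashes, leaves ./core (or core.<pid>)
     *   $ gdb ./crashy core
     *   (gdb) bt                     # backtrace: parse_value called from main
     *   (gdb) frame 1                # move to the caller's stack frame
     *   (gdb) print arg              # inspect the variable that was NULL
     */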
Re:Procrastination? (Score:5, Interesting)
Your example is way too simplistic. I've seen core dump cases in which it was perfectly clear why it was crashing: the data structure got into a logically inconsistent state that it should never be in. The question is how and when. In case of big data structures (in some of the cases I've had to deal with: hundreds of thousands of objects, built gradually and modified heavily during several hours) finding the exact sequence that causes the inconsistency can be a nightmare. Plus, it might have been sitting around in this state for a long time before the program actually enters a path that leads to a crash.
Also, the worst type of bug is the Heisenbug: those that go away as soon as you enable debugging, or add even just a single line of extra code to monitor something while the stuff is running. I've seen my share of those as well. Sometimes persistence pays off and you find the root cause within hours or days, but sometimes reality forces you to give up. It's no use spending five weeks fruitlessly looking for a rare intermittent bug triggered by a convoluted internal unit test case if at the same time easily traceable bugs are being reported by paying users that need a solution within a week.
Re: (Score:2)
C: the fix may cause a bug or other issues. Something may stop working.
It also depends on the security problem. If it is a local exploit then it may not be worth fixing right then.
I think everyone is confused here. These are not exploits that they have closed and just haven't decided to send out the patch for. These are exploits that they haven't created the patch for. A security team has limited resources. They may have x exploits, so it is only logical to fix the most critical first.
Re: (Score:2)
If I believed that they would actually fix all the bugs before moving on to the next revision and adding new features, I might agree with you.
What ends up happening is that a whole pile of bug fixes ends up in the next revision but never gets fixed in the last product, as a means to force you to upgrade. Over time the company gets more and more lax about bugfixes because those bugfixes guarantee their revenue stream! Then they get a r
Re: (Score:2, Insightful)
You have the choice of, A: Patching immediately, costing you a few hours of time from a couple of your employees or B: Hoping that it won't be a big risk effectively betting a few hours of time against the possibility of a huge security breach and the corresponding bad press that comes with that.
Not that simple. Developing a patch does not fix a security hole. Releasing a patch does not fix a security hole. Applying a patch fixes a security hole, if all goes well. When you combine the fact that the number of holes in existence is multiplied by the number of installations, with the fact that the development team very rarely has any power over when patches are actually applied, security through obscurity doesn't look so cut-and-dried naïve versus publicizing your holes by releasing patches.
No
Yup (Score:2, Insightful)
Author is Right (Score:5, Interesting)
Option 1:
Exploit is published, patch is delivered really quickly. Sysadmin thinks, "Those guys at company X are on top of it..." PHB will say the same.
Option 2:
Unilaterally announce fixes, make patches available. Sysadmin doesn't bother to read the whole announcement and whines because it makes work she doesn't understand or think is urgent. PHB thinks "Gee company X's software isn't very good, look at all the patches..."
The market for secure software is small, and even smaller if you add standards compliance. Microsoft is a shining example of how large the market is for insecure, non-standard software.
Depends on what you call a security hole.... (Score:3, Insightful)
Not likely to be fixed completely - In some ways, Windows is a security hole
Could be fixed if escalates - password strength and use
Should be fixed - Lack of any authorization requirements etc.
If you remember the Pinto car-b-ques, there is a money factor to think about. Since most standard computing systems are not life-critical, some bugs can be left for later. Some bugs you might know about but they are not in your code such as those shipped with the networking stack of the RTOS that you use for an embedded product. An insecure FTP client on an embedded machine that has no access to other machines or sensitive material is not terribly bad.
On the other hand, if the machine can be compromised and allow the attacker access to other machines... that needs to be fixed.
Re: (Score:2)
Why was this modded as troll - just because of the statement, "In some ways, Windows is a security hole"?
There is some truth in this, IMHO. An example of how one could consider this true is that Microsoft no longer offers patches for older versions of Windows. I understand it is at the user's discretion to keep up to date with costly Windows updates and to keep up with patches, but I still maintain that there is some truth to this statement, even if it is only a little bit of truth. Furthermore, any OS
A car analogy... (Score:4, Insightful)
Wasn't it GM that lost millions of dollars a few years ago in a lawsuit brought by people (and their kin) whose car was rear-ended on a toll plaza and exploded in flames?
GM's arguments, that making the car's fuel tank more protected was too expensive for the modicum of additional safety it would've provided, were, for better or worse, ignored by the jury...
In other words, you may not deem a security hole to be large compared with the expense of pushing out another patch, but if somebody gets hurt, and their lawyer subpoenas your internal e-mails on the subject, you may well be out of business.
Re:A car analogy... (Score:5, Interesting)
1. The Pinto, even before the "fix", didn't have the highest death rate in its class. Other small cars had the same deaths per mile or worse.
2. The NHTSA had the dollars-per-death-rate figure in the national standards for safety, and Ford referenced it in their internal documentation, which the lawyer used in the case.
3. Had Ford not identified the risk that a bolt posed to the fuel tank and documented it, they probably wouldn't have lost so big in court.
Just thought I would try and kill a myth with some truth. Probably will not work but it is worth a shot.
Re: (Score:2)
And yes, Ford's knowing the fuel-tank explosions could happen counted against it if it didn't try to recall affected Pintos.
Ford, forgetting and repeating history, put exploding gas tanks in several model years of Crown Victorias recently. Again, there was only a slim chanc
Re: (Score:2)
Yes, had there not been evidence that they knew that there was a problem, knew how significant it was and how easy to fix, and chose not to do so, things would have gone better for them in court.
Not sure how that is part of a "problem", either with the popular perception's relation to the facts (since it's part of the popular perception and central to the reason it's held up a
Re:A car analogy... (Score:5, Funny)
Let me guess - you're a LISP programmer?
Re:A car analogy... GM would probably fix a (Score:2)
(CAPTCHA: DISCREET)
What do they have to gain? (Score:2)
Besides, aren't there liability issues with knowingly shipping a product with undisclosed defects? What if they underestimate the severity of a vulnerability? How can they
Re: (Score:3, Insightful)
They fix the problem as soon as they discover it. The next release of the product does not have the problem. If the problem becomes public before the next release, then they immediately issue the patch for it and hope that people patch.
As long as they release often enough that the fixes are largely in place before the problems are found, I have no issue with this. It actually seems responsible since it poses less
Re: (Score:2)
Re: (Score:2)
(2) possible outcomes (Score:3, Insightful)
That other one: Someone exploits the bug to a degree you and your team never considered and your user community is devastated.
...and... (Score:2)
yes (Score:3, Insightful)
Proposal (Score:2, Interesting)
"Welcome, JoeUser. This is WidgetMax v2.0.3. As a reminder, this product may contain security holes or exploits that we already know about and don't want to spend the money to fix because we internally classify them as low-to-medium risk."
I'm not saying it's necessarily wrong -- budgets are finite -- but keeping policies internal because of how they w
Not fix bugs? Not a good idea. (Score:2)
A security hole is a bug, plain and simple. There's no excuse for deliberately not fixing a bug. Now, you can make an argument that if the bug's minor and not causing customer problems you should hold the fix for the next regularly-scheduled release, but that's about it. The argument that unannounced holes don't seem to be being exploited is particularly disingenuous. People aren't looking for exploits of holes they don't know about. It's not surprising, then, that few people are reporting problems they are
Why is software different than tangible things? (Score:3, Interesting)
If an automaker or toy manufacturer didn't issue a recall on a minor safety issue immediately, they'd get tons of bad press. But a software company can sit on just about any security bug indefinitely (I'm looking at you, Microsoft) and few people care.
I suspect 2 factors are at work here:
#2 probably won't ever happen industry wide, and until the public understands how much impact software security can have, they won't care.
100% Correct -- for many reasons (Score:5, Interesting)
Outside of terribly serious security holes, security holes are only security holes when they become a problem. Until then, they are merely potential problems. Solving potential problems is rarely a good idea.
We're not talking about tiny functions that don't consider zero values. We're talking about complex systems where every piece of programming has the potential to add problems not only to the system logic, but also to add more security holes.
That's right, I said it -- security patches can cause security holes.
It is our standard practice not to touch things that are working. Not every application is a military application.
I'll say it again. Not every application is a military application.
Your front door has a key-lock. That's hardly secure -- they are easily picked. But it's secure enough for most neighbourhoods.
So the question with your software is: when does this security hole matter, and how does it matter. If it's only going to matter when a malicious person attacks, then the question comes down to how many attackers you have. And if those attackers are professional, you might as well make their life easier, because they'll get in eventually in one way or another -- I'd rather know how they got in and be able to recover.
How it matters: if it reveals sensitive information, it matters. If it causes your application to occasionally crash when an operator is nearby, then it doesn't matter.
There are more important things to work on -- and many of these minor security holes actually disappear with feature upgrades -- as code is replaced.
Re:100% Correct -- for many reasons What...? (Score:2)
I suppose that is why some software companies use the S/W equiv of Poly-Razz-Ma-Tazz... overwhelm the user with a plethora of "features" so that metrics look better when ONE otherwise major bug would stand out in a small-feature program/app. But, then many customers (and devs/publishers) cannot see the forest for the trees.
Re: (Score:2, Insightful)
When given the option between a week of effort to fix a security hole (for free), or a week of effort to build a new feature (for cost), not all but most of our clients prefer the latter. They would rather grow their business
Re: (Score:2)
Says you. Try deploying a restaurant point-of-sale system that only crashes "occasionally". You'll have managers just LOVING your software when it goes down during rush hour. It matters...
The OpenBSD experience (Score:2)
It worked for OpenBSD, though conceivably they could have gotten the same results with less labor.
Their policy was to audit code looking for problems, and then fix every problem they found without even checking whether it was exploitable.
Interestingly, one result was that OpenBSD became unusually difficult to crash.
Not many projects are willing to set up their priorities the way the OpenBSD team has, and there are reasons.
Re: (Score:2, Interesting)
Regarding the rotten branch, I'd hope that falls under the other half of my comment regarding dangerous things that really matter. Regarding the windows, yeah, I live in a very safe neighbourho
If you HAVE a solution, you should fix it. (Score:3, Insightful)
I could see not waiting till your next regular patch, so as to avoid bringing it to the attention of the hackers.
But the rest of his arguments are pretty crappy.
Uh.. A/V bad? (Score:2)
"It's possible that if anti-virus software had never been created, we wouldn't be dealing with the level of worm and bot sophistication that we face today."
And if we didn't use antibiotics we wouldn't be seeing the current evolutionary pace of biological malware.
TFA presents some points for discussion, but this doesn't strike me as one of them.
Did I really just type 'biological malware?'
Regards.
Just because Microsoft acts that way... (Score:2)
I really want to know what company the "reader" works for, so I can add them to my shit list. I don't want to support such abhorrent security practices.
And remember: Friends don't let friends buy Microsoft.
Risk of litigation (Score:2, Insightful)
fraud? (Score:2)
Trying to artificially inflate your bugfix ratings or trying to save money is a benefit.
I don't see how this company expects to evade legal CRIMINAL responsibility when someone is harmed because of a pre-existing security problem they knew about but did not disclose at the time of sale.
there is no taxonomy of problems (Score:2)
It can never happen to you: Probability = 0.0
It might happen to you: 0.0 < Probability < 100.0
The problem is that we don't know this when we hear of a problem. All we hear about is the theoretical problem, and the probability of a theoretical problem being theoretically true = 100.0%. If we had a way to neatly classify vulnerabilities into both the probability of occurrence and the probability that som
Overworking coders and dumb PHBs does not help (Score:2)
When you rush things out to meet a clueless PHB's deadline, things get passed over.
When you cut funding, sometimes you don't have the hardware to do full testing, and you end up with test code on the server or a desktop turned into a test server.
When you waste the coders' time on crap like TPS reports and useless meetings to talk about why things are running late, that does not help.
Hit em in the pocket (Score:2)
In the case of MS, how often do they close one hole only to open up another? I don't want to throw OSes around, but look at team OpenBSD; regardless of the smug attitudes, you have to give Theo and his group credit. They don't release for the sake of keeping up with the Joneses. They're methodical, accurately screening and scrutinizing what their OS does, what it's supposed to do, and how it does it.
The i
I once worked for a place like that (Score:5, Interesting)
We too "trained" our coders in the art of secure programming. The problem, of course, is that we were also training them in basic things like what a C pointer is and how to not leak memory. Advanced security topics were over their head. This is the case in 9 out of 10 places I've worked. The weak links, once identified, can't be fired. No... these places move them to critical projects to "get their feet wet."
At the security giant, training only covered the absolute basics: shell escaping and preventing buffer overflows with range checking. The real problem is that only half of our bugs were caused by those problems. The overwhelming majority were caused by poor design often enabled by poor encapsulation (or complete lack of it).
There were so many use cases for our product that hadn't been modeled. Strange combinations of end-user interaction had the potential to execute code on our appliance. Surprisingly, our QA team's manual regression testing (click around our U.I.) never caught these issues, but did catch many typos.
I don't believe security problems are inevitable. I've been writing code for years and mine never has these problems (arrogant, but mostly true). I can say, with certainty, that any given minor-version release has had thousands of high-quality tests performed and verified. I use the computer, not people... so there's hardly any "cost" to do so repeatedly.
I run my code through the paces. I'm cautious whenever I access RAM directly. My permissions engines are always centralized and the most thoroughly tested. I use malformed data to ensure that my code can always gracefully handle garbage. I model my use cases. I profile my memory usage. I write automated test suites to get as close to 100% code coverage as possible. I don't stop there. I test each decision branch for every bit of functionality that modifies state.
Aside from my debug-time assertions, I use exception handling quite liberally. It helps keep my code from doing exceptional things. Buffer overflows are never a problem, because I assume that all incoming data from ANY source should be treated as if it were pure Ebola virus. I assume nothing, even if the protocol / file header is of a certain type.
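In that spirit, here is a minimal sketch (the record format and sizes are invented for illustration) of the "treat every incoming byte as hostile" rule: validate the sender's claimed length against both the data actually received and the destination buffer before copying anything.

    #include <stdint.h>
    #include <string.h>

    #define MAX_PAYLOAD 256

    /* Parses a toy record: [1-byte length][payload]. Returns the number of
     * payload bytes copied into 'out', or -1 if the input is malformed.
     * Every field from the wire is checked before it is trusted. */
    int parse_record(const uint8_t *in, size_t in_len,
                     uint8_t *out, size_t out_len)
    {
        if (in == NULL || out == NULL || in_len < 1)
            return -1;

        size_t claimed = in[0];

        /* The sender's claimed length must fit both the data we actually
         * received and the destination buffer -- never trust the header. */
        if (claimed > in_len - 1 || claimed > out_len || claimed > MAX_PAYLOAD)
            return -1;

        memcpy(out, in + 1, claimed);
        return (int)claimed;
    }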
Security problems exist because bad coders exist. If you code and you know your art form well, you don't build code that works in unintended ways. Proper planning, good design, code reviews, and disciplined testing is all you need.
Re: (Score:2)
When would you consider that you are using RAM directly? Do you mean when you are using low-level functions like malloc() and free() or any time you access an object on the heap? Heck, even the stack is in RAM for that matter. It makes sense to be cautious when allocating memory and accessing it without a memory manager, but when using higher-level languages like Java or C# it doesn't seem to be as critical (since an exception will be thrown which is easily caught when trying to access invalid memory areas)
Re: (Score:3, Insightful)
Unfortunately there seems to be very few companies willing to budget (in time or resources) for any more than two of these, let alone all four. And even more unfortunately, the past 40 years of commercial software seem to suggest that such miserliness has been a profitable decision.
That's retarded (Score:2)
Please give me the name of this guy's company so I can avoid all their products.
Then there's the ethics and responsibility args (Score:3, Informative)
Litmus Test (Score:3, Insightful)
Fixing holes (Score:2, Insightful)
History shows that there are lots of black hats that will sit on security breaches/exploits/bugs/etc. and exploit them for their own ends rather than reporting them to a company. Breaches in security should be patched as soon as they are discovered. If 1 person found the bug/hole/exploit/whatever, that means another person
Re: (Score:2)
But I'm not sure on the practice, what happens when a disgruntled employee leaves? Did he have access to that information?
Yes (Score:3, Interesting)
Re: (Score:2)
There always is a question (Score:2)
Simple truth is, for any software with a sufficient number of users, there will be users that will manage to find even the most strange "features" and often exploit them to their advantage (as in, it benefits them actually).
As long as you are facing the unknown (users that might actually think that any particular issue is a feature and use it) versus the known ("yes, there seems to be a problem with IP parsing if using nonstandard syntax, but nobody has compl
the OpenBSD approach (Score:2)
Downside to always patching (Score:5, Insightful)
Of course not! (Score:4, Insightful)
I'm shocked by how many people answer this with an unqualified "Yes." That's not realistic at all.
Here's an example. An app asks for your password. That password gets written to memory for a period of time. During that time, the page containing the password may get swapped to disk. The system may then crash or lose power, leaving the password on disk. I could then boot into another OS, read the swap file, and recover your password. Unlikely, but possible.
There, I "found" a security hole. Want to patch it? You could try to make every app that uses a password store it in wired (not swappable) memory - but performance will suffer (and good luck doing that in every app). You could also dynamically encrypt all data that gets written to the swap file, and decrypt it when read, at the cost of performance again.
Are you willing to trade performance in every app to defend against this mostly theoretical vulnerability? Probably not. Security is about tradeoffs. Welcome to the realities of software development.
Re: (Score:3, Informative)
Ever heard of mlock [die.net]? You don't need to make the whole application non-swappable, just the page that contains the password. And the call is trivial to use.
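For what it's worth, a minimal sketch of that idea on a POSIX system (error handling abridged; a real program would also consider the RLIMIT_MEMLOCK limit and prefer a non-optimizable wipe such as explicit_bzero where available):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define PW_MAX 128

    int main(void)
    {
        static char password[PW_MAX];

        /* Pin the page(s) holding the password so they are never swapped
         * to disk; may fail if the locked-memory limit is too low. */
        if (mlock(password, sizeof password) != 0) {
            perror("mlock");
            return 1;
        }

        if (fgets(password, sizeof password, stdin) == NULL)
            return 1;

        /* ... authenticate with the password ... */

        /* Wipe before unlocking so no plaintext lingers in memory
         * (a hardened build would use explicit_bzero or equivalent). */
        memset(password, 0, sizeof password);
        munlock(password, sizeof password);
        return 0;
    }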