Should Vendors Close All Security Holes?

johnmeister writes to tell us that InfoWorld's Roger Grimes is finding it hard to completely discount a reader's argument that low- and medium-risk security bugs should only be patched once they are publicly discovered. "The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complaint is made. Before you start disagreeing with this policy, hear out the rest of his argument. 'Our company spends significantly to root out security issues,' says the reader. 'We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don't patch the problem.'"
  • by Gary W. Longsine ( 124661 ) on Monday May 14, 2007 @03:17PM (#19119077) Homepage Journal
    Exploit Chaining [blogspot.com] means that low-risk holes can become high-risk holes when combined (see the sketch below). Patch them all. Patch them quickly.
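
    To make the chaining point concrete, here is a minimal Python sketch; the endpoint names, paths, and severity ratings are invented for illustration, not taken from any real product. Each finding looks minor on its own, but the first hands an attacker exactly the input the second one trusts.

        # Hypothetical example of two "low risk" findings chaining into data exposure.
        import os

        SECRET_DIR = "/srv/app/private"   # never meant to be reachable by users

        def debug_info():
            # Finding #1 (rated low): verbose diagnostics leak internal paths.
            return {"version": "1.0", "config": os.path.join(SECRET_DIR, "config.ini")}

        def serve_file(requested_path: str) -> bytes:
            # Finding #2 (rated low): traversal sequences are rejected, but absolute
            # paths are trusted on the theory that attackers can't guess them.
            if ".." in requested_path:
                raise ValueError("traversal rejected")
            with open(requested_path, "rb") as f:
                return f.read()

        # Chained: the leak from finding #1 supplies exactly the path that
        # finding #2 will happily serve.
        leaked_path = debug_info()["config"]
        # serve_file(leaked_path)  # would return the private config file

    Neither issue alone would normally trigger an emergency patch, which is exactly how chains like this slip through.
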
  • Re:i didn't rtfa (Score:3, Interesting)

    by WrongSizeGlass ( 838941 ) on Monday May 14, 2007 @03:22PM (#19119163)
    I guess this guy only locks each door or window in his house and car after someone has discovered that it's unlocked? I sure hope his kids live with their mom.
  • Bugs should be fixed (Score:5, Interesting)

    by Anarchysoft ( 1100393 ) <anarchy@anaSLACK ... com minus distro> on Monday May 14, 2007 @03:22PM (#19119167) Homepage

    "Our company spends significantly to root out security issues," says the reader. "We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don't patch the problem."
    I don't believe this is a prudent approach. Bugs often cause (or mask) more problems than the single issue that gets them fixed; in other words, fixing the bug behind a known issue can also fix several unknown issues. Without a significant reason not to (such as a product that has been completely replaced, at a company with very limited resources), it is irresponsible not to fix bugs. The debatable point is how long small bugs should be allowed to collect before issuing a point release.
  • Author is Right (Score:5, Interesting)

    by mpapet ( 761907 ) on Monday May 14, 2007 @03:24PM (#19119217) Homepage
    Pre-emptive disclosure works against the typical closed source company.

    Option 1:
    Exploit is published, patch is delivered really quickly. Sysadmin thinks, "Those guys at company X are on top of it..." PHB will say the same.

    Option 2:
    Unilaterally announce fixes, make patches available. Sysadmin doesn't bother to read the whole announcement and whines because it creates work she doesn't understand or consider urgent. PHB thinks "Gee, company X's software isn't very good, look at all the patches..."

    The market for secure software is small; it's even smaller if you add standards compliance. Microsoft is a shining example of how large the market is for insecure, non-standard software.
  • Proposal (Score:2, Interesting)

    by PaladinAlpha ( 645879 ) on Monday May 14, 2007 @03:28PM (#19119301)
    I think developers and companies should think long and hard about how such policies would be received if the end-user were presented with them in plainspeak.

    "Welcome, JoeUser. This is WidgetMax v2.0.3. As a reminder, this product may contain security holes or exploits that we already know about and don't want to spend the money to fix because we internally classify them as low-to-medium risk."

    I'm not saying it's necessarily wrong -- budgets are finite -- but keeping policies internal because of how they would be viewed publicly is deceiving your customer, full stop. Also, these guys are setting themselves up as the final arbiter of which exploits are risky. It makes one uncomfortable to think about.
  • by Dracos ( 107777 ) on Monday May 14, 2007 @03:35PM (#19119437)

    If an automaker or toy manufacturer didn't issue a recall on a minor safety issue immediately, they'd get tons of bad press. But a software company can sit on just about any security bug indefinitely (I'm looking at you, Microsoft) and few people care.

    I suspect 2 factors are at work here:

    1. The general public doesn't care about software security because it doesn't affect their daily lives
    2. There's no "think of the children!" emotional aspect to software

    #2 probably won't ever develop industry-wide, and until the public understands how much impact software security can have, they won't care.

  • by holophrastic ( 221104 ) on Monday May 14, 2007 @03:36PM (#19119453)
    We do the same thing. Every company has limited resources, and every decision is a business priority decision. So the decision is always between new features and old bugs.

    Outside of terribly serious security holes, security holes are only security holes when they become a problem. Until then, they are merely potential problems. Solving potential problems is rarely a good idea.

    We're not talking about tiny functions that don't consider zero values. We're talking about complex systems where every piece of programming has the potential to add problems not only to the system logic, but also to add more security holes.

    That's right, I said it -- security patches can cause security holes.

    It is our standard practice not to touch things that are working. Not every application is a military application.

    I'll say it again. Not every application is a military application.

    Your front door has a key lock. That's hardly secure -- key locks are easily picked. But it's secure enough for most neighbourhoods.

    So the question with your software is: when does this security hole matter, and how does it matter. If it's only going to matter when a malicious person attacks, then the question comes down to how many attackers you have. And if those attackers are professional, you might as well make their life easier, because they'll get in eventually in one way or another -- I'd rather know how they got in and be able to recover.

    How it matters: if it reveals sensitive information, it matters. If it only causes your application to occasionally crash when an operator is nearby, then it doesn't matter.

    There are more important things to work on -- and many of these minor security holes actually disappear with feature upgrades, as code is replaced.
  • Bullshit. (Score:1, Interesting)

    by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday May 14, 2007 @03:39PM (#19119547)

    Closing all vulnerabilities is not practical.

    Then running that software is not "practical". Any vulnerability is a vulnerability.

    In any sufficiently complex piece of software, there will be bugs and security holes.

    And "sufficiently complex" is defined as having "bugs and security holes". No. That just means that there is no such thing as "security". And we've been over that before.

    Obviously, you need to close the nasty ones, but many of these exploits are not particularly high risk.

    Right. And enough of those "not particularly high risk" vulnerabilities can be linked together to crack your machine as surely as 1 remote root exploit could be.

    In these cases, especially if the fix would involve a major redesign or other highly disruptive solution, it may be best to just leave them alone.

    "Best" in this case is being defined as "best for the company selling the software" and NOT "best for the people using that software".

    What would be best for the users is the knowledge that there are fundamental security issues and that they might want to use a competitor's product.

    If, for example, the underlying design of your product allows for a minor, difficult to exploit security hole, it is probably not worth it to spend the time and money to redesign the product.

    Again, "not worth it" from the point of view of the vendor. Let's be clear on that.

    The decision to close a security hole should be dependent on the potential impact of the hole, the urgency of the issue (are there already exploits in the wild, for example), and how many resources (time and money) it will take to fix it.

    How would you KNOW if there were already exploits in the wild, unless someone was advertising them?

    That approach means that an exploit can sit for years at the "unimportant" level ... until one day it hits the "EMERGENCY!!! FIX IT NOW!!!" level.

    Yeah, that's the kind of support I want from my vendors.
  • Re:A car analogy... (Score:5, Interesting)

    by LWATCDR ( 28044 ) on Monday May 14, 2007 @03:41PM (#19119563) Homepage Journal
    It was Ford and it was the Pinto. The problems are:
    1. The Pinto, even before the "fix", didn't have the highest death rate in its class. Other small cars had the same deaths per mile or worse.
    2. The NHTSA had the dollars-per-death figure in the national safety standards, and Ford referenced it in their internal documentation, which the lawyer used in the case.
    3. Had Ford not identified the risk that a bolt posed to the fuel tank and documented it, they probably wouldn't have lost so big in court.

    Just thought I would try and kill a myth with some truth. Probably will not work but it is worth a shot.

  • by SatanicPuppy ( 611928 ) * <Satanicpuppy.gmail@com> on Monday May 14, 2007 @03:51PM (#19119787) Journal
    The only argument that makes any sense to me is, "Every time we force our customers to patch systems, we run the risk of creating incompatibilities and getting slews of angry phone calls, and that'll screw up our week" -- and they didn't even include that one.

    Ideally the stuff should be reasonably secure out of the gate; sure, they're talking about all the reasons they have for not patching after the fact, and all this stuff is true... Patching is a huge pain in the ass for everyone involved. But dammit, the amount of patching that has to get done is inexcusable!

    The thing that burns me is, you know that the developers don't incorporate those "tentative" fixes into the next product release either, not until the bugs make it public. You know that there is middle management who is perfectly aware of significant poor design decisions that could be solved by a well-planned rewrite, who instead tell their people to patch and baby weak code, because the cost of doing it right would impact their deliverables.
  • by pestilence669 ( 823950 ) on Monday May 14, 2007 @03:54PM (#19119835)
    The place I worked for was a security company. They had no automated unit testing and never analyzed for intrusions. You'd be shocked to find out how many holes exist on devices people depend on to keep them safe. The employees took it upon themselves (subverting authority) to patch our product. Security problems, even on security hardware, were not "priority" issues.

    We too "trained" our coders in the art of secure programming. The problem, of course, is that we were also training them in basic things like what a C pointer is and how to not leak memory. Advanced security topics were over their head. This is the case in 9 out of 10 places I've worked. The weak links, once identified, can't be fired. No... these places move them to critical projects to "get their feet wet."

    At the security giant, training only covered the absolute basics: shell escaping and preventing buffer overflows with range checking. The real problem is that only half of our bugs were caused by those problems. The overwhelming majority were caused by poor design often enabled by poor encapsulation (or complete lack of it).

    There were so many use cases for our product that hadn't been modeled. Strange combinations of end-user interaction had the potential to execute code on our appliance. Surprisingly, our QA team's manual regression testing (click around our U.I.) never caught these issues, but did catch many typos.

    I don't believe security problems are inevitable. I've been writing code for years and mine never has these problems (arrogant, but mostly true). I can say, with certainty, that any given minor-version release has had thousands of high-quality tests performed and verified. I use the computer, not people... so there's hardly any "cost" to do so repeatedly.

    I run my code through the paces. I'm cautious whenever I access RAM directly. My permissions engines are always centralized and the most thoroughly tested. I use malformed data to ensure that my code can always gracefully handle garbage. I model my use cases. I profile my memory usage. I write automated test suites to get as close to 100% code coverage as possible. I don't stop there. I test each decision branch for every bit of functionality that modifies state.

    Aside from my debug-time assertions, I use exception handling quite liberally. It helps keep my code from doing exceptional things. Buffer overflows are never a problem, because I assume that all incoming data from ANY source should be treated as if it were pure Ebola virus. I assume nothing, even if the protocol / file header is of a certain type.

    Security problems exist because bad coders exist. If you code and you know your art form well, you don't build code that works in unintended ways. Proper planning, good design, code reviews, and disciplined testing are all you need (a sketch of this kind of hostile-input handling follows below).
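
    As an illustration of the "treat every byte of input as hostile" and "feed it malformed data" approach described above, here is a minimal Python sketch; the record format, size limit, and names are invented for illustration rather than taken from any real product.

        # A length-prefixed record parser that validates every field before
        # using it, plus a loop that throws random garbage at it to confirm
        # it only ever fails in the one expected way.
        import os
        import random
        import struct

        MAX_RECORD = 4096  # bound we enforce; never trusted from the wire

        class MalformedInput(Exception):
            pass

        def parse_record(buf: bytes) -> bytes:
            if len(buf) < 4:
                raise MalformedInput("truncated header")
            (length,) = struct.unpack(">I", buf[:4])
            if length > MAX_RECORD:          # don't trust the declared length
                raise MalformedInput("declared length too large")
            if len(buf) < 4 + length:
                raise MalformedInput("body shorter than declared length")
            return buf[4:4 + length]

        # Garbage feeding: whatever bytes arrive, the parser must either return
        # a payload or raise MalformedInput -- never crash or over-read.
        for _ in range(10_000):
            blob = os.urandom(random.randint(0, 64))
            try:
                parse_record(blob)
            except MalformedInput:
                pass

    The same shape scales up: centralize the validation, treat declared sizes as claims to verify, and make the malformed-input loop part of the automated test suite.
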
  • Yes (Score:3, Interesting)

    by madsheep ( 984404 ) on Monday May 14, 2007 @04:14PM (#19120283) Homepage
    Yes, yes they should patch them all. Personally it'd eat away at me knowing I could spend a few minutes, hours, or days to fix a vulnerability in my software. I don't think I could take pride in what I do if I just left crap like this around because I don't have to fix it and don't think it's important unless someone finds it publicly. I'm glad they fix the HIGHs (however they rate this.. who knows?) and the publicly disclosed ones. But why not fix the small ones as you find them? It's a little bit of embarrassment every time an issue is found; this is one less piece of embarrassment. Maybe it's the quasi-perfectionist in me, but I couldn't imagine not fixing this stuff.
  • by Jerry ( 6400 ) on Monday May 14, 2007 @04:32PM (#19120607)
    Excellent summary!!!

    Point #5 is interesting in that this guy ASSUMES that because a bug hasn't been made public, the crackers don't know anything about it. It is just as reasonable to assume that if even ONE person found the bug, that person could be a cracker. Most folks don't look for vulnerabilities, but crackers do.

    Microsoft may know about a vulnerability for months, or longer, before it issues a patch AND the announcement on the same day, IF it ever does. Not all holes are found by researchers; for a hole to be discovered in the wild in the first place, at least one user, but probably thousands or more, has to get infected and report it. All we have to do to see the results of "security by obscurity" is to look at how vulnerable Windows/Vista is. Vista's "Defender" identified only 82.4% of the several thousand KNOWN malware samples thrown at it; other anti-virus software identified as many as 99.8%. So, of the roughly 2,400 new pieces of Windows malware discovered last December, Vista would have missed about 17.6% -- around 420 over the month, or about 14 per day.
  • by holophrastic ( 221104 ) on Monday May 14, 2007 @04:43PM (#19120797)
    Certainly your point is taken. Liability lawsuits would drastically change things. But in general, product manufacturers are not liable for malicious attacks. If your lock is picked, your locksmith is not to blame. And you can always purchase low-quality products without warranties -- like cheaper surge protectors.

    Regarding the rotten branch, I'd hope that falls under the other half of my comment regarding dangerous things that really matter. Regarding the windows, yeah, I live in a very safe neighbourhood -- you can punch through every glass window here, and most of them won't shatter, so you won't even get hurt. But in general, picking locks may look suspicious, but it can be done in under ten seconds by anyone who has practiced for a few days.

    But that's exactly our points -- yours and mine. We don't get better locks, and we don't get better windows -- front or back. We simply don't worry about such attacks because we live in safe neighbourhoods -- safe by statistical standards. There's a break-in every few months around here, and expensive things are stolen. But these thieves get through police-patrolled neighbourhoods, alarm systems, and locks of all kinds. We don't improve them because most would get through anyway. And ultimately, there aren't enough break-ins to be concerned.
  • Re:Procrastination? (Score:5, Interesting)

    by mce ( 509 ) on Monday May 14, 2007 @05:02PM (#19121141) Homepage Journal

    Your example is way too simplistic. I've seen core dump cases in which it was perfectly clear why it was crashing: the data structure got into a logically inconsistent state that it should never be in. The question is how and when. In the case of big data structures (in some of the cases I've had to deal with: hundreds of thousands of objects, built gradually and modified heavily over several hours), finding the exact sequence that causes the inconsistency can be a nightmare. Plus, the structure might have been sitting around in that state for a long time before the program actually enters a path that leads to a crash.

    Also, the worst type of bug is the Heisenbug: one that goes away as soon as you enable debugging, or add even just a single line of extra code to monitor something while the stuff is running. I've seen my share of those as well. Sometimes persistence pays off and you find the root cause within hours or days, but sometimes reality forces you to give up. It's no use spending five weeks fruitlessly looking for a rare intermittent bug triggered by a convoluted internal unit test case if, at the same time, easily traceable bugs are being reported by paying users who need a solution within a week. (A sketch of the kind of invariant check that can shorten that hunt follows below.)
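
    One common way to shrink the window described above -- corruption that sits in a big structure for hours before anything crashes -- is a cheap consistency check that runs after every mutation in debug builds, so the failure happens at the offending update rather than much later on an unrelated path. A minimal Python sketch, with the data structure and invariant invented purely for illustration:

        # A doubly linked list that re-verifies its prev/next invariant after
        # every mutation when CHECK_INVARIANTS is on. A corrupting update then
        # fails immediately instead of hours later.
        CHECK_INVARIANTS = True   # switch off for production builds

        class Node:
            def __init__(self, value):
                self.value = value
                self.prev = None
                self.next = None

        class DList:
            def __init__(self):
                self.head = None

            def _check(self):
                # Invariant: every node's prev pointer must point at the node
                # we just walked through.
                node, prev = self.head, None
                while node is not None:
                    assert node.prev is prev, "prev pointer out of sync"
                    prev, node = node, node.next

            def push_front(self, value):
                node = Node(value)
                node.next = self.head
                if self.head is not None:
                    self.head.prev = node
                self.head = node
                if CHECK_INVARIANTS:
                    self._check()

    Because the check only reads the structure, it is less likely to perturb the failure than ad-hoc logging, though it admittedly does nothing for the timing-sensitive Heisenbugs mentioned above.
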

  • by Anonymous Coward on Monday May 14, 2007 @08:28PM (#19123879)
    How about embedding watermarks in the code? A different watermark per build, along with having the compiler randomize the function layout, should confuse some people. Hopefully the evil attackers will develop a full-featured reverse engineering package that can solve the halting problem.

    Everyone wins!!
