Security IT

Should Vendors Close All Security Holes? 242

johnmeister writes to tell us that InfoWorld's Roger Grimes is finding it hard to completely discount a reader's argument to only patch medium- or low-risk security bugs when they are publicly discovered. "The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complaint is made. Before you start disagreeing with this policy, hear out the rest of his argument. 'Our company spends significantly to root out security issues,' says the reader. 'We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don't patch the problem.'"
  • i didn't rtfa (Score:5, Insightful)

    by flynt ( 248848 ) on Monday May 14, 2007 @02:14PM (#19119015)
    I did not RTFA, but I did read the summary. I did not hear his argument, I heard his conclusion repeated with more words.
    • Re: (Score:3, Interesting)

      I guess this guy only locks each door or window in his house and car after someone has discovered that it's unlocked? I sure hope his kids live with their mom.
      • Re: (Score:3, Insightful)

        by Dan Ost ( 415913 )

        I guess this guy only locks each door or window in his house and car after someone has discovered that it's unlocked? I sure hope his kids live with their mom.
        The problem is that if they release a patch, they draw attention to the code that had the flaw, resulting in more hacker scrutiny than if they had quietly sat on the patch until the next release.

        If they could release security patches invisibly, they probably would. Unfortunately, there's no way to do that.
        • Security is not a zero-sum system. The more holes you patch, the more secure the program gets. Yes, I know that patches can introduce new bugs, but that is usually not the case. I think this concept is based on the idea that there is always some fixed amount of bugs out there, so why fix them when you won't be doing any good.
          • by Heembo ( 916647 )
            With respect, I do not agree. I believe that the more you rework a software system (and that includes bug patches), the more fragile it becomes. There comes a time when you need to stop patching and start over with an SDLC from the beginning. Security needs to be baked in from the beginning as a core requirement.
    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday May 14, 2007 @02:24PM (#19119213)
      Basically ...

      #1. If we spend time fixing those bugs, we won't have as much time to fix the important bugs.

      Translation: we put in so many bugs that we don't have time to fix them all.

      #2. We give priority to any bugs that someone's leaked to the press.

      Translation: we only fix it if you force us to.

      #3. "Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software."

      I had to post that verbatim. They're releasing new bugs in their patches.

      #4. "Fourth, every disclosed bug increases the pace of battle against the hackers."

      Yeah, that one too. The more bugs they fix, the faster the .... what the fuck?

      #5. If we don't announce it, they won't know about it.

      Great. So your customers are at risk, but don't know it.
      • No matter how good the QA testing is on a piece of software before it's released, it invariably has bugs and security risks. Why do you have a problem with assigning priorities to issues that need fixing?

        Are you incapable of thinking reasonably, or do you just like pointing fingers?
        • by Lord_Slepnir ( 585350 ) on Monday May 14, 2007 @02:46PM (#19119693) Journal
          I just don't think that the GP has ever worked on a large piece of software, or has worked in a business environment. Linux has some of the best minds in the world working on it, and it still has holes. Vista could have used a few more months being polished, but I can only imagine the threats of "Release now or else" coming from the headquarters.
        • No matter how good the QA testing is on a piece of software before it's released, it invariably has bugs and security risks.

          No one is arguing that.

          The discussion is about whether the attempt should be made to address ALL of those ... or not.

          Why do you have a problem with assigning priorities to issues that need fixing?

          Where did I say that?

          They SHOULD be prioritized. No sense in trying to patch a local user, non-exploitable crash bug when you have a remote root vulnerability (with exploit).

          But the system is

        • Re: (Score:3, Informative)

          The whole point of the article is that the company in question refrains from releasing a patch, even when they have a fix ready. This is not prioritization.
          • by SnapShot ( 171582 ) on Monday May 14, 2007 @04:25PM (#19121495)
            It could just reflect a realization that releasing a patch is as likely to introduce new bugs as it is to fix an existing one. And the patch identifies existing bugs, which means that customers who don't install it are more vulnerable than they were before. Instead, you save up your fixes and your new features and release new versions as a "dot release", and you reduce the number of versions out there in the wild. From a psychological standpoint your customers get new features and "updated security" instead of a never-ending series of security patches. Prioritization should go on behind the scenes, of course, so that the more critical fixes always make it into the latest release.

            Now I'm off to read the article and see if my theories match up with their logic...
        • by Chris Burke ( 6130 ) on Monday May 14, 2007 @02:54PM (#19119851) Homepage
          No matter how good the QA testing is on a piece of software before it's released, it invariably has bugs and security risks.

          Trivial and meaningless statement. There is good code and bad code. Good code is code with fewer bugs. Bad code is code with many bugs. A good developer is one who designs the code to avoid bugs, and who, more importantly, fixes the bugs they find. A bad developer uses the above truism as an excuse to avoid fixing their shitty code.

          Why do you have a problem with assigning priorities to issues that need fixing?

          When one of those priorities is "don't fix until our customers find out, and try to keep them from finding out" then I have a problem with it.

          The only thing that should distinguish a high priority bug from a low priority bug is: Do we fix it, then release the patch as an urgent hotfix? Or do we fix it, then release the patch as part of a periodic security update so that we have more time to test and so sysadmins aren't overwhelmed having to apply and test patches all the time? There is no priority that should read "Do not fix, unless we get bad P.R. for it."

          The only developer who would do such a thing is a bad developer who is okay with leaving their customers exposed. Of course the reason they got into that situation, of having so many security issues that they can't afford to fix them all, is due to them being bad developers.

          Are you incapable of thinking reasonably, or do you just like pointing fingers?

          You need to drag your brain out of its pie-in-the-sky abstract concepts like "do you have a problem with priorities" and start actually thinking about the situation before you start saying things like this.
      • by Lord_Slepnir ( 585350 ) on Monday May 14, 2007 @02:41PM (#19119569) Journal
        #3. "Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software."

        I had to post that verbatim. They're releasing new bugs in their patches.

        Partially true. By doing a bindiff between the old binaries and new binaries, they can see things like "Interesting, they're now using strncmp instead of strcmp. Let's see what happens when I pass in a non-null terminated buffer..." or "they're now checking to make sure that parameter isn't null" or whatever.

        The defects were there before, but the patches give hackers a pointer that basically says "Look here, this was a security hole". Since end-users are / were really bad about patching their systems in a sane time frame, this gives the hackers a window of opportunity to use the exploit before everyone patches up.
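
        To make that concrete, here is a sketch (function and argument names invented, not from any real product) of the kind of before/after that stands out in a bindiff:

        #include <string.h>   /* strcmp, strncmp */

        /* Hypothetical pre-patch code: trusts that 'token' is NUL-terminated. */
        int check_token_v1(const char *token, const char *expected) {
            return strcmp(token, expected) == 0;   /* reads past the buffer if no NUL */
        }

        /* Hypothetical patched code: the new length argument and the switch to
         * strncmp() tell an attacker exactly which input used to be unbounded. */
        int check_token_v2(const char *token, const char *expected, size_t max) {
            return strncmp(token, expected, max) == 0;
        }

        The fix itself is correct; the point is that the diff doubles as a map of where the old version was vulnerable.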

        • Re: (Score:3, Insightful)

          by CastrTroy ( 595695 )
          However, if I were a systems admin, I'd much rather have the option to keep my systems secure by having the updates available than have the company sit on fixes because they think the hackers don't already know what the bugs are. The most valuable tools a hacker has are those that nobody knows about. If people are aware of a bug, then it is more likely that the hole won't be exploitable. If nobody knows about the bug, then you can catch a lot of people off guard, and break into a lot more syste
          • We just had this conversation on two different days last week. On both of those days I was modded down for suggesting that the Linux release-when-ready model offered greater security than the Microsoft release-when-we-feel-like-it model. You're not going to get a lot of agreement from people because they don't want to take responsibility for their own fuckups, and they want the vendor to do their job for them... and that includes telling them when they can patch.
          • I guess I agree with you, but that's because I, like you, am pretty good about keeping important things patched and up to date... so it becomes, in part, an argument pitting the needs/wants of the vigilant vs. the needs/best interests of the lazy and uninformed masses (of course, there is a gradient between those two extremes and one may be in one group one day and the other on another day, yadda yadda, but the distinction is still valid).

            So, of course, being a regular patcher, I'd prefer that they patch th
      • by markov_chain ( 202465 ) on Monday May 14, 2007 @02:43PM (#19119617)

        #3. "Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software."

        I had to post that verbatim. They're releasing new bugs in their patches.


        No, they are fixing old bugs. Old but unknown bugs, which now become known to hackers, who can go and abuse the vulnerabilities wherever they didn't get patched yet. It's pretty old news, really.
         
      • by Jimmy King ( 828214 ) on Monday May 14, 2007 @02:45PM (#19119661) Homepage Journal

        #3. "Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software."

        I had to post that verbatim. They're releasing new bugs in their patches.

        That's not how I read the response, not that how I read it is better.

        What I got from reading the entire paragraph about that was that they patch the exact issue found, but do a terrible job of making sure that the same or similar bugs are not lurking in other, related parts of the code. Hackers then see the bug fix report and go looking for similar exploits abusing the same kind of bug in other parts of the program. These new exploits would not be found if they hadn't fixed and published the first one.

        This is not any better than causing new security issues with their security patches, but let's at least bash them for the right thing.
        • I'll go with your reading. Thanks!
        • This is what I thought as well. After all, this is exactly what happened with the .ANI bug. It seems pretty obvious that the company in question does a really bad job of auditing code in response to finding a new class of bugs.
      • The only argument that makes any sense to me is, "Every time we force our customers to patch systems, we run the risk of creating incompatibilities and getting slews of angry phone calls, and that'll screw up our week" and they didn't even include that one.

        Ideally the stuff should be reasonably secure out the gate; sure, they're talking about all the reasons they have for not patching after the fact, and all this stuff is true...Patching is a huge pain in the ass for everyone involved. But dammit, the amount of patching that gets done is inexcusable!

        The thing that burns me is, you know that the developers don't incorporate those "tentative" fixes into the next product release either, not until the bugs make it public. You know that there is middle management who is perfectly aware of significant poor design decisions that could be solved by a well-planned rewrite, who instead tell their people to patch and baby weak code, because the cost of doing it right would impact their deliverables.
      • He doesn't work for Microsoft by any chance, does he?
      • Re: (Score:3, Interesting)

        by Jerry ( 6400 )
        Excellent summary!!!

        Point #5 is interesting in that this guy ASSUMES that because a bug hasn't been made public, the crackers don't know anything about it. It is just as reasonable to assume that if even ONE person found the bug, that person could be a cracker. Most folks don't look for vulnerabilities, but crackers do.

        Microsoft may know about a vulnerability for months, or longer, before it issues a patch AND the announcement on the same day, IF it ever does. Not all holes are found by researchers. Meanwh
    • Agreed. All security holes should be fixed. I realise that, with the testing effort involved in large projects, it may not be feasible to get the fixed product out instantly, and it may require waiting until the next planned release - if the problem is a small and unknown-to-the-public one.

      If it's a problem that people know about and could be serious, then I think it should definitely be fixed ASAP.
    • by eln ( 21727 ) on Monday May 14, 2007 @02:20PM (#19119131)
      Also, vendors should include a free pony with every software license they sell.

      Closing all vulnerabilities is not practical. In any sufficiently complex piece of software, there will be bugs and security holes. Obviously, you need to close the nasty ones, but many of these exploits are not particularly high risk. In these cases, especially if the fix would involve a major redesign or other highly disruptive solution, it may be best to just leave them alone.

      If, for example, the underlying design of your product allows for a minor, difficult to exploit security hole, it is probably not worth it to spend the time and money to redesign the product. More likely, your choices would be either a.) live with the (small) vulnerability, or b.) scrap the product entirely.

      The decision to close a security hole should be dependent on the potential impact of the hole, the urgency of the issue (are there already exploits in the wild, for example), and how many resources (time and money) it will take to fix it.
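
      As a rough illustration of that kind of triage (all numbers, names, and figures below are invented, not from the article), you can weigh expected loss against fix cost:

      /* Illustrative triage sketch: compare the expected annual loss from leaving
       * a hole open against the estimated cost of fixing it.  Inputs are made up. */
      #include <stdio.h>

      struct hole {
          const char *name;
          double exploit_probability;   /* chance of exploitation this year, 0.0 .. 1.0 */
          double impact_cost;           /* estimated damage if exploited */
          double fix_cost;              /* cost to patch, test, and ship */
      };

      int main(void) {
          struct hole holes[] = {
              { "remote root in listener",      0.60, 2000000.0,  50000.0 },
              { "local, non-exploitable crash", 0.05,   10000.0, 120000.0 },
          };

          for (size_t i = 0; i < sizeof holes / sizeof holes[0]; i++) {
              double expected_loss = holes[i].exploit_probability * holes[i].impact_cost;
              printf("%-30s expected loss %9.0f vs fix cost %9.0f -> %s\n",
                     holes[i].name, expected_loss, holes[i].fix_cost,
                     expected_loss > holes[i].fix_cost ? "patch now" : "defer to next release");
          }
          return 0;
      }

      Estimating the inputs is the hard part, of course; the comparison itself is trivial.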
      • The headline is a bit misleading. The article is not about what you seem to think it is about. The company in question, as a standard procedure, does not release patches for many bugs that they have already created fixes for.

        Once you have developed a fix, it is completely unethical to wait indefinitely to release the fix. The longest acceptable wait is until the end of the code audit to look for similar holes in other parts of the code base. This should only take a few months.
      • Closing all vulnerabilities is not practical. In any sufficiently complex piece of software, there will be bugs and security holes. Obviously, you need to close the nasty ones, but many of these exploits are not particularly high risk. In these cases, especially if the fix would involve a major redesign or other highly disruptive solution, it may be best to just leave them alone.

        Having worked on commercial software for a few years now, I'd have to agree with the parent. All complex programs come with bu

      • Closing all vulnerabilities is not practical. In any sufficiently complex piece of software, there will be bugs and security holes.

        I hate to say this, but if your software is so complex that it is impossible to fix all the security holes... Then maybe you shouldn't make it so complex.

        I mean even something as complex as OS X has security holes, but not so many that it requires the developers to throw their hands in the air and say "Oh we give up!" at some point.

        Seriously, if your product is so complex and po
    • Re: (Score:2, Insightful)

      by RahoulB ( 178873 )
      Should Vendors Close All Security Holes?

      NO

      ACTIVEX IS A FEATURE!
  • by Gary W. Longsine ( 124661 ) on Monday May 14, 2007 @02:17PM (#19119077) Homepage Journal
    Exploit Chaining [blogspot.com] means that low-risk holes can become high-risk holes when combined. Patch them all. Patch them quickly.
  • by jshriverWVU ( 810740 ) on Monday May 14, 2007 @02:18PM (#19119087)
    "We still research the bug and come up with tentative solutions, but we don't patch the problem." I can understand the point if it's to save time and money for other things, but if they are going to find a solution to the problem and time/money is already spent, then that is completely wasted if it isn't utilized. Plus you're risking the data by not closing a known hole or bug. Doesn't make sense.
    • Re: (Score:3, Informative)

      I can understand the point if it's to save time and money for other things, but if they are going to find a solution to the problem and time/money is already spent, then that is completely wasted if it isn't utilized.

      Patch and next version are different things. They fix the hole but don't release a patch. The fix is released in the next version.
    • >>We still research the bug and come up with ****tentative**** solutions, but we don't patch the problem.

      They come up with a tentative solution, but they don't spend the time to do full testing on it unless it's a critical security hole. Why? As mentioned, they prefer to:
      1) Use limited resources to focus on finding critical problems
      2) Not introduce new code to a known system unless necessary
      3) Not put their real-world, paying-customers-who-don't/can't-patch in danger.

      Again, the letter makes clear
    • by pikine ( 771084 )
      What you're missing is the third and fourth points: that security patches attract more attention from blackhat hackers. There are two possible reasons I can think of (that he didn't mention):

      1. Poor-quality coding tends to concentrate around a particular feature, because it's all written by the same person or team. Security patches indicate poor-quality code.
      2. Security patches provide some insight for reverse engineering, so this allows someone to find more vulnerabilities around that p
  • Their arguments: 1-5 (Score:5, Informative)

    by Palmyst ( 1065142 ) on Monday May 14, 2007 @02:21PM (#19119149)
    As is too common, the /. summary doesn't have the relevant portions of the article under discussion, so let me try to summarize the main points of their argument.

    1. It is better to focus resources on high risk security bugs.
    2. We get rated better (internally and externally) for fixing publicly known problems.
    3. Hackers usually find additional bugs by examining patches to existing bugs, so a patch could expose more bugs than fixes are available for.
    4. If we disclose a bug and fix it, it just escalates the "arms race" with the hackers. Better to keep it hidden.
    5. Not all customers immediately patch. So by announcing a patch for a bug previously unknown to the public, we actually exponentially increase the chances of that bug being exploited by hackers.
  • by KitsuneSoftware ( 999119 ) on Monday May 14, 2007 @02:21PM (#19119151) Homepage Journal
    It could work as well as the normal method, but if it catches on, it will mostly be used as an excuse to not do anything until publicly shamed. Call me cynical.
  • Bugs should be fixed (Score:5, Interesting)

    by Anarchysoft ( 1100393 ) <anarchy@anarchys ... .com minus berry> on Monday May 14, 2007 @02:22PM (#19119167) Homepage

    "Our company spends significantly to root out security issues," says the reader. "We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don't patch the problem."
    I don't believe this is a prudent approach. A bug often causes (or masks) more problems than the one issue that prompted the fix. In other words, fixing a bug causing a known issue can also fix several unknown issues. Without a significant reason not to do so (such as a product that has been completely replaced, in a company with very limited resources), it is irresponsible to not fix bugs. The debatable point is how long small bugs should be allowed to collect before issuing a point release.
    • fixing a bug causing a known issue can also fix several unknown issues

      Just as often the reverse applies. A bug often shadows other bugs. Take away the main bug and there's just another right behind it which might even be worse. This is why you don't just "shoot from the hip" when fixing bugs.

  • Seems like a pretty dumb move to me. You have the choice of A: patching immediately, costing you a few hours of time from a couple of your employees, or B: hoping that it won't be a big risk, effectively betting a few hours of time against the possibility of a huge security breach and the corresponding bad press that comes with that.

    Seems like a small patch wouldn't be that much trouble and would avoid much larger problems...
    • by The_Wilschon ( 782534 ) on Monday May 14, 2007 @02:39PM (#19119539) Homepage
      Low-risk does not mean easy to fix. Sometimes, a bug might be a very low-risk bug, but demand immense amounts of time to find and fix. For instance, sometimes I might be writing a program, and at some point, it begins crashing unpredictably, but very rarely. I know that there is a bug, but I have no idea what the trigger is, I have no idea which part of the code contains the bug, and I have no idea how to fix it. Since the MTBF is (say) 3 months, and (say) the code is not long-running (like a daemon or a kernel), it is probably not worth finding and fixing the bug.

      Now, that's bugs, which is a wider category than security holes. So, suppose that instead of crashing, it very rarely and briefly enters a state in which a particular sequence of bytes sent to it via the net can cause it to execute arbitrary code. Furthermore, suppose the program should never be running as root, so the arbitrary code is nearly useless. This is a low risk security hole, and probably not worth patching.

      Could take hundreds of man-hours to find the cause, and perhaps even longer to fix. Probability of ever seeing this exploited is very very low. Should it then be patched?
      • Now, that's bugs, which is a wider category than security holes. So, suppose that instead of crashing, it very rarely and briefly enters a state in which a particular sequence of bytes sent to it via the net can cause it to execute arbitrary code. Furthermore, suppose the program should never be running as root, so the arbitrary code is nearly useless. This is a low risk security hole, and probably not worth patching.

        Could take hundreds of man-hours to find the cause, and perhaps even longer to fix. Probabi
      • Re: (Score:3, Informative)

        by dvice_null ( 981029 )
        > it begins crashing unpredictably, but very rarely. I know that there is a bug, but I have no idea what the trigger is, I have no idea which part of the code contains the bug

        You sound like a person who needs a good debugger. Take gdb for example. You can ask your customer to send in the core dump file, which the program produces during the crash, then you load this core dump into gdb and not only will you get the exact location of the crash, you can also check where it was called and what values each va
        • Re:Procrastination? (Score:5, Interesting)

          by mce ( 509 ) on Monday May 14, 2007 @04:02PM (#19121141) Homepage Journal

          Your example is way too simplistic. I've seen core dump cases in which it was perfectly clear why it was crashing: the data structure got into a logically inconsistent state that it should never be in. The question is how and when. In the case of big data structures (in some of the cases I've had to deal with: hundreds of thousands of objects, built gradually and modified heavily during several hours), finding the exact sequence that causes the inconsistency can be a nightmare. Plus, it might have been sitting around in this state for a long time before the program actually enters a path that leads to a crash.

          Also, the worst type of bug is the Heisenbug: the kind that goes away as soon as you enable debugging, or add even just a single line of extra code to monitor something while the stuff is running. I've seen my share of those as well. Sometimes persistence pays off and you find the root cause within hours or days, but sometimes reality forces you to give up. It's no use spending five weeks fruitlessly looking for a rare intermittent bug triggered by a convoluted internal unit test case if at the same time easily traceable bugs are being reported by paying users who need a solution within a week.
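
          For readers who haven't used the workflow dvice_null describes, a minimal sketch (the crashing program is invented; the gdb commands are standard):

          /* Deliberately crashing program, to illustrate the core-dump workflow. */
          #include <stdio.h>

          static void parse(const char *s) {
              printf("first char: %c\n", s[0]);   /* bug: no NULL check */
          }

          int main(void) {
              parse(NULL);   /* crashes; with "ulimit -c unlimited" it leaves a core file */
              return 0;
          }

          /* On the customer's machine:  ulimit -c unlimited; ./prog
           * Back at the vendor:         gdb ./prog core
           *   (gdb) bt full    -- backtrace with local variables for every frame
           *   (gdb) frame 0    -- select parse()'s frame, where the fault occurred
           *   (gdb) print s    -- inspect the argument that triggered the crash
           * As mce notes above, this tells you where it died, not necessarily why. */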

    • by LWATCDR ( 28044 )
      Except for:
      C: the fix may cause a bug or other issues. Something may stop working.

      It also depends on the security problem. If it is a local exploit then it may not be worth fixing right then.

      I think everyone is confused here. These are not exploits that they have closed and just haven't decided to send out the patch for. These are exploits that they haven't created the patch for. A security team has limited resources. They may have x exploits so it is only logical to fix the most critical first.
      • They may have x exploits so it is only logical to fix the most critical first.

        If I believed that they would actually fix all the bugs before moving on to the next revision and adding new features, I might agree with you.

        What ends up happening is that a whole pile of bug fixes ends up in the next revision but never gets fixed in the last product, as a means to force you to upgrade. Over time the company gets more and more lax about bugfixes because those bugfixes guarantee their revenue stream! Then they get a r

    • Re: (Score:2, Insightful)

      You have the choice of A: patching immediately, costing you a few hours of time from a couple of your employees, or B: hoping that it won't be a big risk, effectively betting a few hours of time against the possibility of a huge security breach and the corresponding bad press that comes with that.

      Not that simple. Developing a patch does not fix a security hole. Releasing a patch does not fix a security hole. Applying a patch fixes a security hole, if all goes well. When you combine the fact that the number of holes in existence is multiplied by the number of installations, with the fact that the development team very rarely has any power over when patches are actually applied, security through obscurity doesn't look so cut-and-dried naïve versus publicizing your holes by releasing patches.

      No

  • Yup (Score:2, Insightful)

    by derEikopf ( 624124 )
    Yup, it's not about quality software, it's about money. Hardly anyone makes software anymore because they want to or because they like making quality software...they'll just do the bare minimum they can to maximize profit.
  • Author is Right (Score:5, Interesting)

    by mpapet ( 761907 ) on Monday May 14, 2007 @02:24PM (#19119217) Homepage
    Pre-emptive disclosure works against the typical closed source company.

    Option 1:
    Exploit is published, patch is delivered really quickly. Sysadmin thinks, "Those guys at company X are on top of it..." PHB will say the same.

    Option 2:
    Unilaterally announce fixes, make patches available. Sysadmin doesn't bother to read the whole announcement and whines because it makes work she doesn't understand or think is urgent. PHB thinks "Gee company X's software isn't very good, look at all the patches..."

    The market for secure software is small; even smaller if you add standards compliance. Microsoft is a shining example of how large the market is for insecure, non-standard software.
  • by zappepcs ( 820751 ) on Monday May 14, 2007 @02:24PM (#19119219) Journal
    Examples:
    Not likely to be fixed completely - In some ways, Windows is a security hole
    Could be fixed if it escalates - password strength and use
    Should be fixed - Lack of any authorization requirements etc.

    If you remember the Pinto car-b-ques, there is a money factor to think about. Since most standard computing systems are not life-critical, some bugs can be left for later. Some bugs you might know about but they are not in your code such as those shipped with the networking stack of the RTOS that you use for an embedded product. An insecure FTP client on an embedded machine that has no access to other machines or sensitive material is not terribly bad.

    On the other hand, if the machine can be compromised and allow the attacker access to other machines... that needs to be fixed.
    • Why was this modded as troll - just because of the statement, "In some ways, Windows is a security hole"?

      There is some truth in this, IMHO. An example of how one could consider this true is that Microsoft no longer offers patches for older versions of Windows. I understand it is at the user's discretion to keep up to date with costly Windows updates and to keep up with patches, but I still maintain that there is some truth to this statement, even if it is only a little bit of truth. Furthermore, any OS

  • A car analogy... (Score:4, Insightful)

    by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Monday May 14, 2007 @02:24PM (#19119223) Homepage Journal

    Wasn't it GM that lost millions of dollars a few years ago in a lawsuit brought by people (and their kin) whose car was rear-ended on a toll plaza and exploded in flames?

    GM's arguments, that making the car's fuel-tank more protected was too expensive for the modicum of additional safety that would've provided, were — for better or worth — ignored by the jury...

    In other words, you may not deem a security hole to be large compared with the expense of pushing out another patch, but if somebody gets hurt, and their lawyer subpoenas your internal e-mails on the subject, you may well be out of business.

    • Re:A car analogy... (Score:5, Interesting)

      by LWATCDR ( 28044 ) on Monday May 14, 2007 @02:41PM (#19119563) Homepage Journal
      It was Ford and it was the Pinto. The problem is:
      1. The Pinto, even before the "fix", didn't have the highest death rate in its class. Other small cars had the same deaths per mile or worse.
      2. The NHTSA had the dollars-per-fatality figure in its national safety standards, and Ford referenced it in their internal documentation, which the lawyer used in the case.
      3. Had Ford not identified the risk that a bolt posed to the fuel tank and documented it they probably wouldn't have lost so big in court.

      Just thought I would try and kill a myth with some truth. Probably will not work but it is worth a shot.

      • Maybe the Pinto wasn't the deadliest small car in '79. It was still the only one, AFAIK, that exploded when rear-ended. It wasn't just that the Pinto's flaw was deadly, it was that it was scary, and deadlier than it should have been.
        And yes, Ford's knowing the fuel-tank explosions could happen counted against it if it didn't try to recall affected Pintos.
        Ford, forgetting and repeating history, put exploding gas tanks in several model years of Crown Victorias recently. Again, there was only a slim chanc
      • Had Ford not identified the risk that a bolt posed to the fuel tank and documented it they probably wouldn't have lost so big in court.

        Yes, had there not been evidence that they knew that there was a problem, knew how significant it was and how easy to fix, and chose not to do so, things would have gone better for them in court.

        Not sure how that is part of a "problem", either with the popular perception's relation to the facts (since its part of the popular perception and central to the reason its held up a

    • by 6Yankee ( 597075 ) on Monday May 14, 2007 @02:52PM (#19119803)
      for better or worth

      Let me guess - you're a LISP programmer?
    • 4-inch crack in a chastity belt by applying multiple layers of Armor All, or maybe Poly-glycote...., butt, on the other hands....

      (CAPTCHA: DISCREET)
  • Why would a corporation choose to not release a patch for a known security vulnerability, even if it is minor? Wouldn't it be better PR to always release the patch before the exploit comes out? This sounds totally unethical to me. They are trying to take an ostrich approach to security: the bug doesn't exist unless the customer can see it.

    Besides, aren't there liability issues with knowingly shipping a product with undisclosed defects? What if they underestimate the severity of a vulnerability? How can they
    • Re: (Score:3, Insightful)

      by Dan Ost ( 415913 )
      Besides, aren't there liability issues with knowingly shipping a product with undisclosed defects?

      They fix the problem as soon as they discover it. The next release of the product does not have the problem. If the problem becomes public before the next release, then they immediately issue the patch for it and hope that people patch.

      As long as they release often enough that the fixes are largely in place before the problems are found, I have no issue with this. It actually seems responsible since it poses less
      • IANAL, but doesn't this strategy leave them open to lawsuits from the customers who are affected by a zero-day exploit? What if one of these "minor" bugs actually enables a DoS? It would seem that the affected customers could sue for damages even if the company had underestimated the severity of the bug.
        • by Dan Ost ( 415913 )
          I don't think any commercial vendor lets you use their software without waiving your rights to sue them if you have a less than perfect experience with their product. I don't know if such a thing would hold up in court, but the fact that these things never seem to go to court implies that they are at least minimally binding.
  • by DebianDog ( 472284 ) <dan&danslagle,com> on Monday May 14, 2007 @02:25PM (#19119255) Homepage
    The one you hope for: Someone finds and publicly announces a problem. Your team looks "quick to act" and deploys a solution.

    That other one: Someone exploits the bug to a degree you and your team never considered and your user community is devastated.
    • That other one: Someone exploits the bug to a degree you and your team never considered and your user community is devastated.
      And they initiate legal proceedings based on the policy of not fixing known security bugs.
  • yes (Score:3, Insightful)

    by brunascle ( 994197 ) on Monday May 14, 2007 @02:26PM (#19119261)
    all known security bugs should be fixed, but low-risk non-public ones can be low priority. we can't expect any vendor to send out a patch each and every time they find a security bug, but once they find one, the next version they release damn well better have it patched.
  • Proposal (Score:2, Interesting)

    I think developers and companies should think long and hard about how such policies would be received if the end-user were presented with them in plainspeak.

    "Welcome, JoeUser. This is WidgetMax v2.0.3. As a reminder, this product may contain security holes or exploits that we already know about and don't want to spend the money to fix because we internally classify them as low-to-medium risk."

    I'm not saying it's necessarily wrong -- budgets are finite -- but keeping policies internal because of how they w
  • A security hole is a bug, plain and simple. There's no excuse for deliberately not fixing a bug. Now, you can make an argument that if the bug's minor and not causing customer problems you should hold the fix for the next regularly-scheduled release, but that's about it. The argument that unannounced holes don't seem to be being exploited is particularly disingenuous. People aren't looking for exploits of holes they don't know about. It's not surprising, then, that few people are reporting problems they are

  • by Dracos ( 107777 ) on Monday May 14, 2007 @02:35PM (#19119437)

    If an automaker or toy manufacturer didn't issue a recall on a minor safety issue immediately, they'd get tons of bad press. But a software company can sit on just about any security bug indefinitely (I'm looking at you, Microsoft) and few people care.

    I suspect 2 factors are at work here:

    1. The general public doesn't care about software security because it doesn't affect their daily lives
    2. There's no "think of the children!" emotional aspect to software

    #2 probably won't ever happen industry wide, and until the public understands how much impact software security can have, they won't care.

  • by holophrastic ( 221104 ) on Monday May 14, 2007 @02:36PM (#19119453)
    We do the same thing. Every company has limited resources, and every decision is a business priority decision. So the decision is always between new features and old bugs.

    Outside of terribly serious security holes, security holes are only security holes when they become a problem. Until then, they are merely potential problems. Solving potential problems is rarely a good idea.

    We're not talking about tiny functions that don't consider zero values. We're talking about complex systems where every piece of programming has the potential to add problems not only to the system logic, but also to add more security holes.

    That's right, I said it -- security patches can cause security holes.

    It is our standard practice not to touch things that are working. Not every application is a military application.

    I'll say it again. Not every application is a military application.

    Your front door has a key-lock. That's hardly secure -- they are easily picked. But it's secure enough for most neighbourhoods.

    So the question with your software is: when does this security hole matter, and how does it matter. If it's only going to matter when a malicious person attacks, then the question comes down to how many attackers you have. And if those attackers are professional, you might as well make their life easier, because they'll get in eventually in one way or another -- I'd rather know how they got in and be able to recover.

    How it matters. If it reveals sensitive information, it matters. If it causes your application to occasionally crash, when an operator is near-by, then it doesn't matter.

    There are more important things to work on -- and many of these minor security holes actually disappear with feature upgrades -- as code is replaced.
    • If it aint FOKE don't BRIX it? EESHSH

      I suppose that is why some software companies use the S/W equiv of Poly-Razz-Ma-Tazz... overwhelm the user with a plethora of "features" so that metrics look better when ONE otherwise major bug would stand out in a small-feature program/app. But, then many customers (and devs/publishers) cannot see the forest for the trees.
      • Re: (Score:2, Insightful)

        You know something, I was planning on rebutting, but you're entirely correct. But you make it sound like a bad thing. Some of our clients are educated and experienced in the technical world. Many of them choose to see the forest from above -- and so they demand a taller tree first.

        When given the option between a week of effort to fix a security hole (for free), or a week of effort to build a new feature (for cost), not all but most of our clients prefer the latter. They would rather grow their business
    • If it causes your application to occasionally crash, when an operator is near-by, then it doesn't matter.

      Says you. Try deploying a restaurant point-of-sale system that only crashes "occasionally". You'll have managers just LOVING your software when it goes down during rush hour. It matters...
    • >Solving potential problems is rarely a good idea.

      It worked for OpenBSD, though conceivably they could have gotten the same results with less labor.

      Their policy was to audit code looking for problems, and then fix every problem they found without even checking whether it was exploitable.

      Interestingly, one result was that OpenBSD became unusually difficult to crash.

      Not many projects are willing to set up their priorities the way the OpenBSD team has, and there are reasons.
  • by gurps_npc ( 621217 ) on Monday May 14, 2007 @02:38PM (#19119521) Homepage
    That is, if you have a patch, then you should fix it.

    I could see holding the fix until your next regular patch release, so as to avoid bringing it to the attention of the hackers.

    But the rest of his arguments are pretty crappy.

  • From TFA:

    "It's possible that if anti-virus software had never been created, we wouldn't be dealing with the level of worm and bot sophistication that we face today."

    And if we didn't use antibiotics we wouldn't be seeing the current evolutionary pace of biological malware.

    TFA presents some points for discussion, but this doesn't strike me as one of them.

    Did I really just type 'biological malware?'

    Regards.
  • ...doesn't mean it is the key to success.

    I really want to know what company the "reader" works for, so I can add them to my shit list. I don't want to support such abhorrent security practices.

    And remember: Friends don't let friends buy Microsoft.
  • IANAL, but if a company suffers a significant financial loss due to a bug that the vendor knew about but did not patch, does that not open them up to big-time lawsuits?
  • Selling a product which you know does not meet specifications in order to derive a benefit is a crime called "fraud".

    Trying to artificially inflate your bugfix ratings or trying to save money is a benefit.

    I don't see how this company expects to evade legal CRIMINAL responsibility when someone is harmed because of a pre-existing security problem they knew about but did not disclose at the time of sale.

  • All problems are theoretical until they happen to you. Before they happen to you they fall into two categories:

    It can never happen to you: probability = 0.0%
    It might happen to you: probability somewhere between 0.0% and 100.0%

    The problem is that we don't know this when we hear of a problem. All we hear about is the theoretical problem and the probability of a theoretical problem being theoretically true = 100.0%. If we had a way to neatly classify vulnerabilities into both the probability of occurrence and the probability that som
  • When you work people 80+ hour weeks, you get a lot more bugs in the code.
    When you rush things out to meet a clueless PHB's deadline, things get passed over.
    When you cut funding, sometimes you don't have the hardware to do full testing, and you end up with test code on the server or a desktop turned into a test server.
    When you waste the coders' time on crap like TPS reports and useless meetings to talk about why things are running late, that does not help.
  • The problem with vendors closing security holes is, they often can't keep up with the volume of them.

    In the case of MS, how often do they close one hole only to open up another? I don't want to throw OSes around, but look at team OpenBSD: regardless of the smug attitudes, you have to give Theo and his group credit. They don't release for the sake of keeping up with the Joneses. They're methodical, accurately screening and scrutinizing what their OS does, what it's supposed to do, and how it does it.

    The i
  • by pestilence669 ( 823950 ) on Monday May 14, 2007 @02:54PM (#19119835)
    The place I worked for was a security company. They had no automated unit testing and never analyzed for intrusions. You'd be shocked to find out how many holes exist on devices people depend on to keep them safe. The employees took it upon themselves (subverted authority) to patch our product. Security problems, even on security hardware, were not "priority" issues.

    We too "trained" our coders in the art of secure programming. The problem, of course, is that we were also training them in basic things like what a C pointer is and how to not leak memory. Advanced security topics were over their head. This is the case in 9 out of 10 places I've worked. The weak links, once identified, can't be fired. No... these places move them to critical projects to "get their feet wet."

    At the security giant, training only covered the absolute basics: shell escaping and preventing buffer overflows with range checking. The real problem is that only half of our bugs were caused by those problems. The overwhelming majority were caused by poor design often enabled by poor encapsulation (or complete lack of it).

    There were so many use cases for our product that hadn't been modeled. Strange combinations of end-user interaction had the potential to execute code on our appliance. Surprisingly, our QA team's manual regression testing (click around our U.I.) never caught these issues, but did catch many typos.

    I don't believe security problems are inevitable. I've been writing code for years and mine never has these problems (arrogant, but mostly true). I can say, with certainty, that any given minor-version release has had thousands of high-quality tests performed and verified. I use the computer, not people... so there's hardly any "cost" to do so repeatedly.

    I run my code through the paces. I'm cautious whenever I access RAM directly. My permissions engines are always centralized and the most thoroughly tested. I use malformed data to ensure that my code can always gracefully handle garbage. I model my use cases. I profile my memory usage. I write automated test suites to get as close to 100% code coverage as possible. I don't stop there. I test each decision branch for every bit of functionality that modifies state.

    Aside from my debug-time assertions, I use exception handling quite liberally. It helps keep my code from doing exceptional things. Buffer overflows are never a problem, because I assume that all incoming data from ANY source should be treated as if it were pure Ebola virus. I assume nothing, even if the protocol / file header is of a certain type.

    Security problems exist because bad coders exist. If you code and you know your art form well, you don't build code that works in unintended ways. Proper planning, good design, code reviews, and disciplined testing is all you need.
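
    As a small sketch of the "treat all input as hostile" habit (the record format and names here are invented for illustration):

    /* Parse a length-prefixed record without ever trusting the length field. */
    #include <stdint.h>
    #include <string.h>

    #define MAX_PAYLOAD 1024

    /* Returns payload length on success, -1 on any malformed input. */
    int parse_record(const uint8_t *buf, size_t buflen, uint8_t *out, size_t outlen) {
        if (buf == NULL || out == NULL || buflen < 2)
            return -1;                          /* too short to even hold the header */

        uint16_t claimed = (uint16_t)((buf[0] << 8) | buf[1]);

        if (claimed > MAX_PAYLOAD || claimed > outlen)
            return -1;                          /* attacker-controlled length: bound it */
        if ((size_t)claimed + 2 > buflen)
            return -1;                          /* claims more bytes than were received */

        memcpy(out, buf + 2, claimed);          /* copy only what was validated */
        return (int)claimed;
    }

    Every early return is a case a malformed-data test should hit at least once.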
    • by joggle ( 594025 )

      When would you consider that you are using RAM directly? Do you mean when you are using low-level functions like malloc() and free() or any time you access an object on the heap? Heck, even the stack is in RAM for that matter. It makes sense to be cautious when allocating memory and accessing it without a memory manager, but when using higher-level languages like Java or C# it doesn't seem to be as critical (since an exception will be thrown which is easily caught when trying to access invalid memory areas)

    • Re: (Score:3, Insightful)

      Proper planning, good design, code reviews, and disciplined testing is all you need.

      Unfortunately there seems to be very few companies willing to budget (in time or resources) for any more than two of these, let alone all four. And even more unfortunately, the past 40 years of commercial software seem to suggest that such miserliness has been a profitable decision.

  • That approach allows hackers to exploit known-about but unreported (and therefore unfixed) loopholes potentially for ages.

    Please give me the name of this guy's company so I can avoid all their products.
  • by postbigbang ( 761081 ) on Monday May 14, 2007 @02:57PM (#19119923)
    They say: if you know about it, you're obliged to fix it. And then you kick your QA department's butts around the corridor several times. If your customers are your software testers, then your business model is likely corrupt. And while there are a number of coders who will complain that it was the libs, or the other guy's fault, ultimately, a responsible organization takes ownership of its faults, just like humans should.
  • Litmus Test (Score:3, Insightful)

    by Fnord666 ( 889225 ) on Monday May 14, 2007 @03:05PM (#19120089) Journal
    Are you willing to indemnify your users for any and all losses suffered by them due to a flaw/bug which you knew about but chose not to patch? If not, then patch it!
  • Fixing holes (Score:2, Insightful)

    by kupekhaize ( 220804 )
    The problem is that if you've discovered a security hole; chances are someone else has as well. Just because a problem hasn't been reported to your company doesn't mean that it is unknown.

    History shows that there are lots of black hats who will sit on security breaches/exploits/bugs/etc. and exploit them for their own ends rather than reporting them to a company. Breaches in security should be patched as soon as they are discovered. If 1 person found the bug/hole/exploit/whatever, that means another person
    • It's a whole different thing to find a hole with the source code than without it.

      But I'm not sure about the practice: what happens when a disgruntled employee leaves? Did he have access to that information?
  • Yes (Score:3, Interesting)

    by madsheep ( 984404 ) on Monday May 14, 2007 @03:14PM (#19120283) Homepage
    Yes, yes they should patch them all. Personally it'd eat away at me knowing I could spend a few minutes, hours, or days to fix a vulnerability in my software. I don't think I could take pride in what I do if I just leave crap like this around because I don't have to fix it and don't think it's important unless someone finds it publicly. I'm glad they fix the HIGHs (however they rate this.. who knows?) and the publicly disclosed ones. But why not fix the small ones as you find them? It's a little bit of embarrassment every time an issue is found. This is one less piece of embarrassment. However, maybe it's the quasi-perfectionist in me, but I couldn't imagine not fixing this stuff.
    • by pammon ( 831694 )
      > But why not fix the small ones as you find them?

      Because a fix isn't always obvious, or may be risky, or may impact a feature, etc. If the security problem is indeed "small," then the fix may not be worth it.
  • What exactly is a security hole and what exactly is a feature?

    Simple truth is, for any software with a sufficient number of users, there will be users who manage to find even the strangest "features" and often exploit them to their advantage (as in, it benefits them actually).
    As long as you are facing the unknown (users who might actually think that any particular issue is a feature and use it) versus the known ("yes, there seems to be a problem with IP parsing when using nonstandard syntax, but nobody has compl
  • They should absolutely patch bugs when discovered, regardless of classified severity. They should take the OpenBSD approach and regularly and aggressively audit their code. I think customers who have paid good money for a product deserve one as bug-free as possible. OpenBSD is an OS that you get for free that has far fewer flaws and is proactive about letting its users know when a bug does crop up. If a free/open source project can manage that with an even tighter budget, there is absolutely no reason a commercial
  • by drfuchs ( 599179 ) on Monday May 14, 2007 @03:30PM (#19120577)
    The article and responses miss an important point: patches of any kind are risky! And not just because they might introduce a new security flaw, but more generally because they may break some feature or another. In applications with millions of lines of code, and where the cost of doing a patch release amortized over all customers is millions of dollars, it can make lots of sense to just roll a fix into the next planned upgrade release. That way you get a complete Q/A and customer beta-test cycle to increase the confidence level of the fix.
  • Of course not! (Score:4, Insightful)

    by pammon ( 831694 ) on Monday May 14, 2007 @03:59PM (#19121081)

    I'm shocked by how many people answer this with an unqualified "Yes." That's not realistic at all.

    Here's an example. An app asks for your password. That password gets written to memory for a period of time. During that time, the page containing the password may get swapped to disk. The system may then crash or lose power, leaving the password on disk. I could then boot into another OS, read the swap file, and recover your password. Unlikely, but possible.

    There, I "found" a security hole. Want to patch it? You could try to make every app that uses a password store it in wired (not swappable) memory - but performance will suffer (and good luck doing that in every app). You could also dynamically encrypt all data that gets written to the swap file, and decrypt it when read, at the cost of performance again.

    Are you willing to trade performance in every app to defend against this mostly theoretical vulnerability? Probably not. Security is about tradeoffs. Welcome to the realities of software development.

    • Re: (Score:3, Informative)

      by DaleGlass ( 1068434 )
      Bad example.

      Ever heard of mlock [die.net]? You don't need to make the whole application non-swappable, just the page that contains the password. And the call is trivial to use.
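
      For what it's worth, a minimal sketch of that approach on Linux/POSIX (assuming the password fits in one small, page-locked buffer):

      /* Keep a password out of swap with mlock(), then scrub it before exit. */
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>

      #define PW_MAX 256

      int main(void) {
          static char pw[PW_MAX];

          if (mlock(pw, sizeof pw) != 0) {      /* pin the buffer: never swapped to disk */
              perror("mlock");
              return 1;
          }

          fputs("Password: ", stderr);
          if (fgets(pw, sizeof pw, stdin) != NULL) {
              pw[strcspn(pw, "\n")] = '\0';
              /* ... authenticate with pw here ... */
          }

          /* Scrub before unlocking; explicit_bzero() is preferable where available,
           * since a plain memset() can sometimes be optimized away. */
          memset(pw, 0, sizeof pw);
          munlock(pw, sizeof pw);
          return 0;
      }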
