Security IT

Should Vendors Close All Security Holes?

johnmeister writes to tell us that InfoWorld's Roger Grimes is finding it hard to completely discount a reader's argument for patching low- and medium-risk security bugs only when they are publicly discovered. "The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complaint is made. Before you start disagreeing with this policy, hear out the rest of his argument. 'Our company spends significantly to root out security issues,' says the reader. 'We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don't patch the problem.'"
  • by Richard McBeef ( 1092673 ) on Monday May 14, 2007 @03:21PM (#19119139)
    I can understand the point if it's to save time and money for other things, but if they've already found a solution to the problem, the time and money is already spent, and it's completely wasted if the fix is never released.

    A patch and the next version are different things. They fix the hole but don't release a patch; the fix ships in the next version instead.
  • Their arguments: 1-5 (Score:5, Informative)

    by Palmyst ( 1065142 ) on Monday May 14, 2007 @03:21PM (#19119149)
    As is too common, the /. summary doesn't include the relevant portions of the article under discussion, so let me try to summarize the main points of their argument.

    1. It is better to focus resources on high risk security bugs.
    2. We get rated better (internally and externally) for fixing publicly known problems.
    3. Hackers usually find additional bugs by examining patches to existing bugs, so a patch could expose more bugs than fixes are available for.
    4. If we disclose a bug and fix it, it just escalates the "arms race" with the hackers. Better to keep it hidden.
    5. Not all customers patch immediately. So by announcing a patch for a bug previously unknown to the public, we actually sharply increase the chances of that bug being exploited by hackers.
  • by Lord_Slepnir ( 585350 ) on Monday May 14, 2007 @03:41PM (#19119569) Journal
    #3. "Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software."

    I had to post that verbatim. They're releasing new bugs in their patches.

    Partially true. By doing a bindiff between the old binaries and new binaries, they can see things like "Interesting, they're now using strncmp instead of strcmp. Let's see what happens when I pass in a non-null terminated buffer..." or "they're now checking to make sure that parameter isn't null" or whatever.

    The defects were there before, but the patches give hackers a pointer that basically says "Look here, this was a security hole." Since end users are, and always have been, really bad about patching their systems in a sane time frame, this gives hackers a window of opportunity to exploit the flaw before everyone patches up.
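    To make that concrete, here's a hypothetical before/after fragment (function names and signatures are my own illustration, not from the article); the one-line change is exactly the kind of thing that jumps out of a diff:

    #include <string.h>

    /* Before the patch: strcmp keeps reading until it hits a NUL byte,
     * so a non-NUL-terminated buffer walks right off the end of input. */
    int check_token_old(const char *input, const char *secret)
    {
        return strcmp(input, secret) == 0;
    }

    /* After the patch: the bounded compare is the tell. An attacker
     * diffing the two binaries sees strcmp become strncmp and knows
     * the old code trusted the caller to supply a terminated string. */
    int check_token_new(const char *input, size_t input_len, const char *secret)
    {
        size_t n = strlen(secret);
        return input_len == n && strncmp(input, secret, n) == 0;
    }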

  • by Jimmy King ( 828214 ) on Monday May 14, 2007 @03:45PM (#19119661) Homepage Journal

    #3. "Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software."

    I had to post that verbatim. They're releasing new bugs in their patches.

    That's not how I read the response, though my reading isn't any more flattering.

    What I got from reading the entire paragraph about that was that they patch the exact issue found, but do a terrible job of making sure the same or similar bugs aren't lurking in other related parts of the code. Hackers see the bug fix report and then go looking for similar exploits abusing the same flaw in other parts of the program (a sketch of the pattern follows below). Those new exploits wouldn't be found if the company hadn't fixed and published the first one.

    This is not any better than causing new security issues with their security patches, but let's at least bash them for the right thing.
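    As a hypothetical sketch of that pattern (names and buffer sizes are mine, not from the article): the same unchecked copy lives in two code paths, only the reported one gets fixed, and the published fix advertises exactly what to grep for elsewhere.

    #include <string.h>

    #define NAME_MAX_LEN 32

    /* Publicly reported and patched: the length is now validated. */
    int set_username(char dst[NAME_MAX_LEN], const char *src)
    {
        if (strlen(src) >= NAME_MAX_LEN)
            return -1;
        strcpy(dst, src); /* safe: length checked above */
        return 0;
    }

    /* Same bug class in a different code path: never reported, never
     * patched. The fix above tells attackers what to hunt for here. */
    int set_hostname(char dst[NAME_MAX_LEN], const char *src)
    {
        strcpy(dst, src); /* still overflows dst for long src */
        return 0;
    }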
  • by 644bd346996 ( 1012333 ) on Monday May 14, 2007 @03:49PM (#19119759)
    The whole point of the article is that the company in question refrains from releasing a patch, even when they have a fix ready. This is not prioritization.
  • by postbigbang ( 761081 ) on Monday May 14, 2007 @03:57PM (#19119923)
    They say: if you know about a hole, you're obliged to fix it. And then you kick your QA department's butt around the corridor several times. If your customers are your software testers, then your business model is likely corrupt. And while there are a number of coders who will complain that it was the libs, or the other guy's fault, ultimately a responsible organization takes ownership of its faults, just as humans should.
  • Re:Procrastination? (Score:3, Informative)

    by dvice_null ( 981029 ) on Monday May 14, 2007 @04:42PM (#19120785)
    > it begins crashing unpredictably, but very rarely. I know that there is a bug, but I have no idea what the trigger is, I have no idea which part of the code contains the bug

    You sound like a person who needs a good debugger. Take gdb, for example. You can ask your customer to send in the core dump file the program produces during the crash, then load that core dump into gdb: not only will you get the exact location of the crash, you can also check where it was called from and what value each variable had.

    Allow me to demonstrate how easy this is:

    # Enable core dumps (if not enabled by default)
    $ ulimit -c unlimited

    # Run the program and watch it crash (this could also be done by the customer, with proper instructions or automation)
    $ ./a.out
    Floating point exception (core dumped)

    # We now have the core dump and a debug version of the program. Start the debugger
    $ gdb ./a.out core

    (snip)

    Program terminated with signal 8, Arithmetic exception.
    #0 0x080483eb in main () at b.cpp:5
    5 int c = a/b;
    (gdb) info locals
    a = 4
    b = 0
    c = -1209610252
    d = -1208343920
    (gdb)

    And there we have it. The program crashes in b.cpp, line 5, when assigning a/b to c. info locals reveals that b is zero, which is the cause of the crash.

    Oh, and the source code for b.cpp:

    int main()
    {
        int a = 4;
        int b = 0;
        int c = a/b; // line 5
        int d = c;
        return 0;
    }
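
    (One caveat the transcript skips: to get file and line info like "b.cpp:5", the binary has to be built with debug symbols, e.g. compiled with g++ -g. Without them, gdb still shows the crash address and backtrace, just not the source lines.)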
  • Re:Of course not! (Score:3, Informative)

    by DaleGlass ( 1068434 ) on Monday May 14, 2007 @07:07PM (#19123049) Homepage
    Bad example.

    Ever heard of mlock [die.net]? You don't need to make the whole application non-swappable, just the page that contains the password. And the call is trivial to use.
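
    For the curious, a minimal sketch of the idea (buffer name and size are my own; on Linux the kernel rounds the locked range out to whole pages for you):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        char password[64];

        /* Pin the page(s) holding the password so they can never be
         * swapped out to disk. */
        if (mlock(password, sizeof password) != 0) {
            perror("mlock");
            return 1;
        }

        /* ... read and use the password here ... */

        /* Wipe before unlocking. Note a plain memset can be optimized
         * away; real code should prefer explicit_bzero where available. */
        memset(password, 0, sizeof password);
        munlock(password, sizeof password);
        return 0;
    }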
