Should Vendors Close All Security Holes?
johnmeister writes to tell us that InfoWorld's Roger Grimes is finding it hard to completely discount a reader's argument for patching low- and medium-risk security bugs only when they are publicly discovered. "The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complaint is made. Before you start disagreeing with this policy, hear out the rest of his argument. 'Our company spends significantly to root out security issues,' says the reader. 'We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don't patch the problem.'"
Re:Waste of time and money (Score:3, Informative)
A patch and the next version are different things. They fix the hole, but they don't release a patch; the fix ships in the next version.
Their arguments: 1-5 (Score:5, Informative)
1. It is better to focus resources on high risk security bugs.
2. We get rated better (internally and externally) for fixing publicly known problems.
3. Hackers usually find additional bugs by examining patches to existing bugs, so a patch could expose more bugs than fixes are available for.
4. If we disclose a bug and fix it, it just escalates the "arms race" with the hackers. Better to keep it hidden.
5. Not all customers patch immediately. So by announcing a patch for a bug previously unknown to the public, we actually increase exponentially the chances of that bug being exploited by hackers.
Re:The summary missed those parts. (Score:5, Informative)
I had to post that verbatim. They're releasing new bugs in their patches.
Partially true. By doing a bindiff between the old binaries and new binaries, they can see things like "Interesting, they're now using strncmp instead of strcmp. Let's see what happens when I pass in a non-null terminated buffer..." or "they're now checking to make sure that parameter isn't null" or whatever.
The defects were there before, but the patch gives hackers a pointer that basically says "Look here, this was a security hole". Since end users are, or at least were, really bad about patching their systems in a sane time frame, this gives hackers a window of opportunity to exploit the hole before everyone patches up.
Re:The summary missed those parts. (Score:4, Informative)
That's not how I read the response, not that my reading is any better.
What I got from the whole paragraph on that point is that they patch the exact issue found, but do a terrible job of making sure the same or similar bugs aren't lurking in other related parts of the code. Hackers then see the bug-fix report and go looking for exploits abusing the same bug elsewhere in the program. Those new exploits wouldn't have been found if the company hadn't fixed and published the first one.
This is not any better than causing new security issues with their security patches, but let's at least bash them for the right thing.
Re:Procrastination? (Score:3, Informative)
You sound like a person who needs a good debugger. Take gdb, for example. You can ask your customer to send in the core dump file the program produces when it crashes. Load that core dump into gdb and you get not only the exact location of the crash, but also where it was called from and what value each variable had.
Allow me to demonstrate how easy this is:
# Enable core dumps (if not enabled by default)
$ ulimit -c unlimited
# Execute the program and watch it crash (the customer could do this step too, given proper instructions or automation)
$
Floating point exception (core dumped)
# We now have the core dump and a debug version of the program. Start the debugger
$ gdb
(snip)
Program terminated with signal 8, Arithmetic exception.
#0 0x080483eb in main () at b.cpp:5
5 int c = a/b;
(gdb) info locals
a = 4
b = 0
c = -1209610252
d = -1208343920
(gdb)
And there we have it. The program crashes in b.cpp, line 5, when assigning a/b to c. info locals reveals that b is zero, which is the cause of the crash.
Oh, and the source code for b.cpp:
int main()
{
    int a = 4;
    int b = 0;
    int c = a/b;
    int d = c;
    return 0;
}
Re:Of course not! (Score:3, Informative)
Ever heard of mlock [die.net]? You don't need to make the whole application non-swappable, just the page that contains the password. And the call is trivial to use.