Hyper-Threading, Linus Torvalds vs. Colin Percival
OutsideIn writes "The recent Hyper-Threading vulnerability
announcement has generated a fair amount of discussion since it was released. KernelTrap
has an interesting article quoting Linux creator Linus Torvalds, who recently compared the vulnerability to similar issues with early SMP
and direct-mapped caches, suggesting, "it doesn't seem all that worrying in real life." Colin Percival,
who published a recent paper on the vulnerability,
strongly disagreed with Linus' assessment, saying, "it is at
times like this that Linux really suffers from having a single dictator in charge; when Linus doesn't understand a problem,
he won't fix it, even if all the cryptographers in the world are standing against him.""
Open Source, YOU win! (Score:2, Informative)
At least Linus.... (Score:2, Informative)
From the original article [daemonology.net]
Re:Single Dictator? (Score:3, Informative)
Paper author just wants attention. (Score:4, Informative)
All said cryptographers should buy a non-hyperthreaded CPU, or turn hyperthreading off.
I mean, if you use GPG [gnupg.org] on most machines, it will issue you a warning about Insecure Memory. That is, someone could potentially harvest data from disused pages in memory.
These cryptographers would use a secure memory system. I'm happy to hope that MI6 isn't running a remote memory exploit on my box.
Comment removed (Score:5, Informative)
Re:Paper author just wants attention. (Score:2, Informative)
Fix the applications (Score:5, Informative)
This appears to be an application bug, not a kernel one. The kernel never claims to completely isolate processes from one another; though there are memory protections, there are loads of ways that processes can observe each other's actions. This is just a particularly high-resolution one.
The real "bug" here, IMO, is that OpenSSL believes that no other process can observe anything about its secret computations. Timing attacks against RSA have been known for some time, particularly with regard to modular exponentiation.
It wouldn't be too hard to make RSA encryption take the same amount of time no matter what code path is used, and to make its memory access patterns uncorrelated with the keys (perhaps by using randomization during allocation). They should do this--the fact that their application leaks information has nothing to do with the processor it's running on; it's just that HT makes that information particularly easy to measure. This would have a performance penalty, and I think the OpenBSD folks are too obsessed with performance, and that's why they've not done this. The performance obsession is a serious problem in the Unix world, and in software systems in general.
If implementing OpenSSL securely effectively means adding special kernel support for things like constant-length timeslices or cache invalidation between context switches, that's fine. But this is not a bug in the kernel unless the kernel purports to enforce total separation between processes, which it certainly does not.
Comment removed (Score:4, Informative)
Re:Fixing is easier said than done (Score:2, Informative)
Re:Paper author just wants attention. (Score:1, Informative)
Re:Paper author just wants attention. (Score:3, Informative)
So even if you did not have any spyware running when you encrypted your secret file on where the key is to the satellite that could melt the poles and flood all coastal cities, you would still be in trouble when 007 gets your laptop, searches through your swap partition, and finds some suitable random-looking bytes that luckily decrypt everything. Once more the free world is saved by Bond, James Bond....
Re:strcmp vulnerability. (Score:4, Informative)
The normal solution is to store both the real password and the password supplied by the user in memory areas that are as large as the maximum password length and zero padded, and do something like this:
Note that it is also vital that the password supplied by the user is zero padded and that the calculation time cannot depend on how long that password is. If it did, it would open up man-in-the-middle attacks. Evil, huh?
Re:Great (Score:5, Informative)
Did you read a paper other than the one I read? I ran the exploit once, taking under one second, and I retrieved enough information to factor the RSA modulus N.
Re:He won't fix it? (Score:1, Informative)
Disabling hyperthreading would make the attack depend on processes switching back and forth, and no process could switch back and forth that quickly (it would be extremely inefficient, so no OS schedules them that way) - so effectively it's unexploitable without hyperthreading.
Basically the problem is that hyperthreading shares CPU resources between two threads at an extremely low level and thus makes the effects of one thread fairly easy to observe from another. The correct solution is to fix the scheduler to never schedule threads owned by different users on the same hyperthreading core.
Hyperthreading is probably going away in any case, as it's extremely inefficient. Multiple full cores (which don't have similar problems) are likely the direction of the future.
It's a genuinely exploitable problem, although it only affects multi-user machines where the users don't trust one another, which means that it isn't a problem for most users.
Re:At least Linus.... (Score:1, Informative)
Re:He won't fix it? (Score:2, Informative)
A possible solution (Score:5, Informative)
Here's another option: since this vulnerability depends on using the L1/L2 cache states to ferret out information, remove that from consideration. When processing an RSA (etc.) key, have the code temporarily lock out context switches, AND have it take a fixed period of time to compute the result (or portion of the result), followed by flushing the cache. No data is left in the cache to analyze. The execution time of the code is fixed, so no data leaks there. You don't have to make the algorithm a constant-time calculation if you can pad it at the end to the maximum (or close enough to the theoretical maximum that there's no true information leakage in practice). This helps avoid the potentially very slow algorithms that are truly constant-time. And flushing the cache at the end of each (portion of the) calculation removes that as a leak. (Note that you need to flush before waiting for the allocated time to elapse, include the maximum flush time in your time budget, and if possible factor possible interrupt effects into your calculation.)
A pain? Yes. Requires extensive mods to the crypto algorithm implementation? Yes (though perhaps not to the core calculation). Requires OS support? Almost certainly. Requires HW support? Would be helpful but probably not required. Loss of performance? Yes, though far lower than disabling HT, I imagine, in normal cases (when not decrypting/encrypting).
Also, some of the restrictions above can probably be eased if the crypto algorithm is carefully designed and matched to the hardware.
Disclaimer: I am NOT a crypto geek! I have worked in processor and cache design, though.
Fixing is easier done than said: WBINVD (Score:4, Informative)
Re:probably easy to fix (Score:4, Informative)
Two reasons: First, I was busy informing vendors -- Intel, Microsoft, the *BSDs, and Linux vendors -- about the problem and explaining the possible solutions. Second, because fixing every application in the world (this doesn't only affect cryptography) would take far more than 3 months.
Re:He won't fix it? (Score:5, Informative)
Maybe the original poster was referring to the DEC-10 page-fault password insecurity, which was based on strcmp returning as soon as it encountered one wrong character, roughly along the following lines:
1) Place the password so the boundary between its 1st and 2nd characters falls on a page boundary.
2) Clear the cache.
3) Call the password check.
4) If no page fault occurred, the 1st character must be wrong; change it and go to 3.
5) If a page fault occurred, the 1st character is correct (as the 2nd character was checked); move the password so the 2nd/3rd character boundary is on the page boundary and repeat.
In this way, you can reduce the attack by a huge amount: for a password of length n, the brute-force work needed goes down from 256^n to 256*n.
So, yes, attacks based on which character strcmp fails at have worked in the past, so it is valid to try not to make the same type of mistake again!
How to deal with timing attacks. (Score:1, Informative)
Random delays can be averaged out.
The best option is to make it take *constant* time for both success and failure.
The constant betrays no information, thus you will remove the information instead of making it less accurate or otherwise obfuscating it.
I believe that this same principle holds for pretty much all timing attacks.
Re:Bzzt! Back the paradigm up here. (Score:3, Informative)
In my experience, correctness is caught here, but security tends to slip through the cracks. It can be very hard to unit test for security (although a well-tested system tends to have fewer buffer overflows and whatnot).
Two problems. First, when you find the bug is irrelevant. It's the bug's value that matters. If your secure server is shipping tomorrow and you find a show-stopper buffer overflow, you fix it. If that delays your ship date for a day or two, such is the price of developing software. Use a process that helps minimize the cost of change (I know I have a favorite [extremeprogramming.org]). If you don't, you're asking for trouble.
Part of these kinds of priorities is that you need to weigh the cost benefit of each change. You are suggesting that the size of the change has some kind of weight in this equation. My claim is that it does not. The customer doesn't care about how many lines you have to change, they care about if your software is safe, correct, secure and fast.
Using pithy excuses like, "I know it doesn't do what we said it would, but we only caught the bug 3 weeks before shipping," is an excellent way to show your customer how incompetent you are at software engineering and project management.
Far better to take the honest route, give them what they paid for, and not ship a bogus version. Unless, of course, your customer decides otherwise.
That's not what you said, and it's not what Linus is saying. Linus is saying, "It's not a problem. The bug is too hard to exploit, and I like hyperthreading." Maybe you've confused him with Alan Cox, who is saying that you can't ignore security bugs of this class, even if they're in hardware, and so he suggests that hyperthreading be turned off.