
Hyper-Threading, Linus Torvalds vs. Colin Percival 396

OutsideIn writes "The recent Hyper-Threading vulnerability announcement has generated a fair amount of discussion since it was released. KernelTrap has an interesting article quoting Linux creator Linus Torvalds, who recently compared the vulnerability to similar issues with early SMP and direct-mapped caches, suggesting "it doesn't seem all that worrying in real life." Colin Percival, who published a recent paper on the vulnerability, strongly disagreed with Linus' assessment, saying "it is at times like this that Linux really suffers from having a single dictator in charge; when Linus doesn't understand a problem, he won't fix it, even if all the cryptographers in the world are standing against him.""
  • He won't fix it? (Score:5, Insightful)

    by Morgahastu ( 522162 ) <bshel ... fave bands name> on Wednesday May 18, 2005 @08:18AM (#12565292) Journal
    Then somebody else will.
  • by lintux ( 125434 ) <slashdot AT wilmer DOT gaast DOT net> on Wednesday May 18, 2005 @08:20AM (#12565303) Homepage
    And how is that somebody else going to make Linus accept the patch?
  • by astro_ripper ( 884636 ) on Wednesday May 18, 2005 @08:23AM (#12565316) Homepage Journal
    Just because Linus doesn't think it's a major issue doesn't mean he won't accept a patch to fix it.
  • Dictator? (Score:5, Insightful)

    by BBrown ( 70466 ) on Wednesday May 18, 2005 @08:23AM (#12565319)
    A dictator who has made his domain open-source, thereby giving everybody free rein to change it and make distinct copies of it?

    Come on.
  • Single Dictator? (Score:3, Insightful)

    by Anonymous Coward on Wednesday May 18, 2005 @08:26AM (#12565330)
    If Linus decides that he does not want to bump its priority up, someone else can fix it. That's what fellow kernel developers do.

    If Microsoft decides not to fix it, then no one else can.

    So which one is the single dictatorship?
  • by Anonymous Coward on Wednesday May 18, 2005 @08:28AM (#12565347)
    The answer to Linus' assertion that "I'd be really surprised if somebody is actually able to get a real-world attack on a real-world pgp key usage or similar out of it" is not to say "Well, we all think it's bad", but to produce a proof-of-concept exploit.

    If he and "all the world's cryptographers" can't do that, then Linus' pragmatism beats the cryptographers' paranoia (their defining quality, in my experience) into a cocked hat.

    If an exploit can't actually be exploited, it's not an exploit.
  • by Timesprout ( 579035 ) on Wednesday May 18, 2005 @08:29AM (#12565352)
    You found an obscure and difficult-to-exploit vulnerability. Now quit trying to make out that the world is doomed and trolling Linus to keep the spotlight on yourself.
  • by Niekie ( 884742 ) on Wednesday May 18, 2005 @08:29AM (#12565354) Homepage
    And it might be best to start researching it as soon as possible, before it is massively exploited by someone who just found out how it works.
  • by tronicum ( 617382 ) * on Wednesday May 18, 2005 @08:30AM (#12565360)
    Just because Linus does not share the same opinion on the importance of this issue does not mean that he is a dictator.

    Colin needs to cool down a bit. At least the Linux distros (SuSE, Red Hat, etc. are the ones that will have a problem on affected systems) are going to get patches for it, from Linus or any other kernel developer.

  • by CaymanIslandCarpedie ( 868408 ) on Wednesday May 18, 2005 @08:30AM (#12565364) Journal
    Oh come on man, don't be that guy ;-)

    So MS$ shouldn't fix problems in IE until an exploit has been shown for it?

    It's better to wait and see before fixing something that may not matter later.

    It's better to just fix it and be safe than to wait and see if something happens later. It may not be top priority, but remember, this "wait and see" approach to security is exactly what got MS$ into so much trouble with users. We don't need the same for Linux.
  • by ssj_195 ( 827847 ) on Wednesday May 18, 2005 @08:32AM (#12565374)
    If others were to "fix" it, they would be working on a copy, and they would be starting an entirely new fork altogether. I, for one, certainly don't want that to happen.
    I think pretty much every distro in existence maintains its own patchset for the vanilla kernel (effectively maintaining its own "fork", I guess), and it is not causing any problems. In brief, if Linus does not accept a useful (or essential!) patch, the distros will, and so this "forking", if you can really call it that, will have no negative effect on the end user.
  • by Xpilot ( 117961 ) on Wednesday May 18, 2005 @08:34AM (#12565390) Homepage
    The kernel developers don't seem to agree on the right way to fix this, whether at the kernel level or in userspace [lkml.org]. However, it may affect the performance of the kernel if it's done in kernelspace, and it is impractical to have everyone rewrite their userland software, as someone else pointed out [lkml.org]. The "patch" which is available [freebsd.org] for FreeBSD to fix this problem only disables hyperthreading [lkml.org] and does not provide a real fix.
  • by A beautiful mind ( 821714 ) on Wednesday May 18, 2005 @08:37AM (#12565409)
    There is nothing to fix there; most of the coders agreed!

    Some people just keep pushing their flawed agendas.

    Disclaimer: I did read the whole thread.
  • by MoonFog ( 586818 ) on Wednesday May 18, 2005 @08:37AM (#12565414)
    I wonder what people would say if this were about Microsoft and not Linux. Do you think Slashdot would talk about it in the same way?
  • by TuringTest ( 533084 ) on Wednesday May 18, 2005 @08:44AM (#12565451) Journal
    If this was about Microsoft and Bill refused to fix the vulnerability, nobody else could write a patch for the sources to solve the problem. See the difference?
  • by ysachlandil ( 220615 ) on Wednesday May 18, 2005 @08:46AM (#12565457)
    And how will they fix this?

    The only fix that kinda works is disabling hyperthreading. But on a dual-core processor the problem is there as well (if there is a shared cache somewhere), and switching off the second core because of that would be stupid.

    The problem Colin points out (getting an RSA key) is a userland problem anyway, so there is nothing for Linus to fix... fixing cache eviction covert channels in the kernel is possible, but not without losing a lot of performance.

    --Blerik
  • by Raleel ( 30913 ) on Wednesday May 18, 2005 @08:50AM (#12565480)
    It's along the same lines as the "if all you've got is a hammer" problem. If you've spent a lot of time working on something, it's obviously important to you. That doesn't mean it's important to everyone else. This may well be a significant flaw from the cryptographers' perspective, but then again, they study crypto a lot and have a vested interest in it.

    As someone pointed out, yay for Linux being free. As one or two above pointed out, someone who does care and has the knowledge will write a patch. It'll get implemented as an option in the code, and if shown to be unobtrusive enough, it may even get turned on by default.
  • by ausoleil ( 322752 ) on Wednesday May 18, 2005 @08:58AM (#12565522) Homepage
    In layman's terms, this debate is:

    Scene: A wispy cloud scuds across the sunny blue sky. Not much happening, and the cloud is hardly even black.

    Chicken Little: The sky is falling! The Sky is falling!

    The Penguin Dictator: No, not really. It might fall, but it's very, very unlikely. So calm down!

    Chicken Little: I strongly disagree. The sky is falling! And because you do not understand the problem we're all going to die!

    The Penguin Dictator: Listen here. It's almost certainly not going to fall, and I need to worry about real problems!

    Chicken Little: (Runs screaming to the nearest coffeehouse with free wireless, where he types incessantly:) The sky is falling! The Sky is falling! Tell Slashdot! Tell Tom's Hardware! Tell Cnet! Tell Linux Business News!

    The Penguin Dictator: Sigh. (And then he gets back to work. He looks up at the audience) They just do not get it, do they?

    The Windows Dark Lord: (Rubs hands together) Excellent, MOST excellent. (Yelling) Bring me my marketing minion!

    Marketing Minion: (being dragged in by a bald guy yelling at him) Yes, O Master!?

    The Windows Dark Lord: Tell all the peasants that the sky is raining huge chunks of fire and dung! Tell everyone, tell them now! And have our independent consultants work on this day and night, night and day! Make sure that they independently tell everyone that they can easily avoid flaming chunks of sky dung if they stand behind our Windows! And RAISE the price!

    Some Guy At Some House In Some City Somewhere: "Wow, that was easy. Let me send this up to the Penguin Dictator. No sky ever fell, and that cloud is easily blown away. Nothing happening here, move along."

    The Penguin Dictator "Well that was easy. Include this patch in the next day's weather update!" Marketing Minion: Press Release!!! Millions killed by falling flaming sky chunks of burning dung with brain eating worms who eat children!!! Run for your lives!!!!

    Laura Didio, munching a donut: "If you would hide behind Windows, the sky would stop falling! Your children would be safer and the world a better place." (Looks at stock ticker, says to self) 'Excellent. MOST excellent. Bring me a donut!'

    The Penguin Dictator "Sigh. Why didn't I just keep Sky 0.7a for myself? Why the bother, wy the bother?"

    EPILOGUE: No one was ever hurt by the piece of sky that never fell, and Chicken Little kept looking upward for another cloud to rant about.

    The End.

  • by mattgreen ( 701203 ) on Wednesday May 18, 2005 @08:58AM (#12565528)
    Nice ad hominem attack. Attack the argument, not the person.
  • Great (Score:2, Insightful)

    by Mensa Babe ( 675349 ) on Wednesday May 18, 2005 @09:00AM (#12565540) Homepage Journal
    "it doesn't seem all that worrying in real life."

    Yeah, just like a mouse driver having full access to kernel security structures and raw disk partitions, it doesn't seem all that worrying at all (except when your entire system crashes thanks to a buggy sound driver, or you get rooted, or...).

    Not fixing this design mistake while laughing at respected experts in the field reminds me of something [google.com]. I was hoping that we as a community might have become a little bit more mature during the last decade, but I seem to have been naïve again.
  • by A beautiful mind ( 821714 ) on Wednesday May 18, 2005 @09:01AM (#12565549)
    It would only be ad hominem if his status had no relation to the issue at hand, but in this case his "obsession" is relevant.
  • by untouchable ( 615727 ) <abyssperl&gmail,com> on Wednesday May 18, 2005 @09:02AM (#12565554) Journal
    I'm sorry. I don't know who modded me funny, but I wasn't actually trying to be funny.

    From the little bit I understand of the paper (which, when this story was first posted on Slashdot on May 13th, wasn't publicly available), it's an extremely low-level attack that depends on certain processes switching back and forth without emptying the cache. (I think that's the basis of it.)

    From what I've learned in software writing, it's preferable to wait and see how much and how badly your software has problems before you start charging into the situation to fix it. Especially something as low-level as this, which could have unseen side effects. Especially since this (to me, at least) seems to be more of a hardware problem than software, per se. (But, of course, I could be wrong.)

    It's better to just fix it and be safe than to wait and see if something happens later.

    I would think that would only be advisable if you (internally) found the bug/security problem. I would put up a notice saying I've heard of this situation, and maybe even come up with an idea for the fix, but I definitely wouldn't implement it until I could prove or see proof that the problem exists.


    P.S. Microsoft's reaction is slightly different from what you describe. Microsoft didn't seem to care about bug fixes to IE, period, only fixing them when the griping got too loud and the public started paying more attention to Firefox. There was no motivation to fix IE at all, not just a 'wait and see' type attitude.

  • The man (Score:3, Insightful)

    by Rinisari ( 521266 ) on Wednesday May 18, 2005 @09:07AM (#12565600) Homepage Journal
    Linus seems to be intelligent enough to understand when to undertake certain tasks. Up to now, no one knew about the vulnerability. There hasn't been solid proof of an exploit released in virus form yet. All this is, as of yet, is FUD. Linus doesn't want to reshape his priorities because of newfound FUD, and he's very smart in doing so.

    I'm sure that if an exploit is found, someone will have a patch ready for the next kernel revision - that's the beauty and advantage of Linux.
  • Re:Great (Score:3, Insightful)

    by Slashcrap ( 869349 ) on Wednesday May 18, 2005 @09:23AM (#12565726)
    Not fixing this design mistake while laughing at respected experts in the field reminds me of something. I was hoping that we as a community might have become a little bit more mature during the last decade, but I seem to have been naïve again.

    I don't think Linus was "laughing at respected security experts" at all. For one thing, this guy isn't a respected security expert - in my experience they usually have jobs. Secondly, he wasn't laughing - his point was that the best way to fix the issue was to patch the userland crypto programs to be more immune to timing attacks, e.g. by making them read a set number of cache lines each time. Anything that makes the crypto software more immune to these things seems good to me.

    Also, how do you patch the kernel to stop this attack? The FreeBSD patch just disables hyperthreading.

    I assume you can answer this since you sound like an expert. I also assume that you bothered to read the actual e-mails and not just the retarded Slashdot blurb.

    But now I'm being naive, aren't I?
  • by Anonymous Coward on Wednesday May 18, 2005 @09:24AM (#12565735)
    How silly to make an OS decision based on those two reasons you wrote... Use the right tool for the job, or you'll be (duh!) using the wrong tool. Mine are FreeBSD for the 'net gateway, Linux for general-purpose work and Windows for gaming - I pay no attention to the coolness factor (or the reverse-snobbery counterpart) and go on technical strengths...
  • by jsonn ( 792303 ) on Wednesday May 18, 2005 @09:24AM (#12565739)
    Get the facts. Colin showed that you can retrieve ~30% of an RSA key by running a program in parallel. This can most likely be improved if you have the chance to do it more than once. It is also important to keep in mind that you can't entirely avoid an unbalanced memory access pattern without also taking a huge performance penalty.

    The point of this debate is that the Intel implementation sucks; it allows you to spy a lot on processes running on the other virtual CPU. Sure, there are better alternatives to disabling HTT, like Colin's suggestion to only schedule threads of the same program on the virtual CPUs. Actually, that is something you want to do anyway, or otherwise you can seriously lose speed and drop under the performance of a processor with HTT disabled.

    Speaking of paranoia, it is often not a bad thing to have; many big security problems can be avoided. Oh, I forgot to patch the Linux box next door.
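
    For readers wondering what "spying on the other virtual CPU" looks like in practice, here is a minimal, simplified sketch of a prime-and-probe style spy loop. This is not Percival's actual code; the buffer size, cache line size and miss threshold are illustrative assumptions, and a real attack needs far more careful calibration.

    /* Hypothetical sketch: prime our own buffer into the L1 cache, then
       re-read it and time each access with rdtsc. A slow reload suggests
       the sibling hyperthread evicted that line, leaking its access
       pattern. Assumes x86 with GCC inline assembly. */
    #include <stdint.h>
    #include <stdio.h>

    #define LINES      64    /* number of cache lines to probe (assumed) */
    #define LINE_SIZE  64    /* assumed cache line size in bytes */
    #define THRESHOLD  120   /* cycles; miss threshold picked by guesswork */

    static unsigned char probe_buf[LINES * LINE_SIZE];

    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        volatile unsigned char sink;
        int i, round;

        for (round = 0; round < 1000; round++) {
            /* prime: pull every line of our buffer into the cache */
            for (i = 0; i < LINES; i++)
                sink = probe_buf[i * LINE_SIZE];

            /* ...the victim on the sibling hyperthread runs meanwhile... */

            /* probe: a slow reload means the line was evicted */
            for (i = 0; i < LINES; i++) {
                uint64_t t0 = rdtsc();
                sink = probe_buf[i * LINE_SIZE];
                if (rdtsc() - t0 > THRESHOLD)
                    printf("round %d: line %d was evicted\n", round, i);
            }
        }
        return 0;
    }

    Even with a working probe loop like this, turning the observed evictions into actual key bits is the hard part, which is where the ~30% figure above comes from.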

  • by swillden ( 191260 ) * <shawn-ds@willden.org> on Wednesday May 18, 2005 @09:25AM (#12565741) Journal

    ... or in any other general-purpose operating system on a general-purpose computer. PCs are fundamentally insecure. There are a dozen ways to spy on cryptographic operations done on them, ranging from trojans to hardware side-channel attacks, and dozens more ways to get copies of the keys that they store. This is just one particular attack that may permit an attacker who can't get a trojan running with sufficient privileges to spy on operations directly to obtain some key bits. But if the attacker can't do that, there are lots of other ways to get the keys. General-purpose computers are simply not trustworthy.

    If security is important, you do your crypto in a secure crypto module, like the FIPS 140-2 Level 4 IBM 4758 [ibm.com] or the Level 3 Luna SA [safenet-inc.com]. Or, you use a general-purpose computer with special-purpose, very simple software and then provide strict physical access control to the machine and very limited network access -- often through a serial link using a custom protocol rather than via a real network. Or you could theoretically use a general-purpose machine with a TCPA chip with a regular, general-purpose operating system that has been modified to make use of the TCPA chip and with keys tightly bound to a well-defined system software configuration. But only if you have good physical security. In many situations it's still better to use a FIPS 140-2 Level 3 or Level 4 device.

    IMO, the existence of weaknesses like this in Linux, and the fact that they're widely known, is a *good* thing, because it helps convince people not to trust that which is inherently untrustworthy. We need more publicity of similar problems in Windows (and there are lots of them).

  • by Gopal.V ( 532678 ) on Wednesday May 18, 2005 @09:30AM (#12565785) Homepage Journal
    "I'd be really surprised if somebody is actually able to get a real-world attack on a real-world pgp key usage or similar out of it"

    Being able to read arbitrary memory from another process is a big security flaw, as illustrated by the Minesweeper Hacking [codeproject.com] sample. But for a kernel programmer it's a minor deal for server security, as it needs a local/remote exploit to run code on your box. Even then it is a read-only exploit, which decreases exploitability unless we're talking about stuff like SSL certs or GPG keys - which are pretty hard to find in 1 GB of data :)

    So to really exploit this, your thread would have to be running at practically 100% CPU. That should be an easy enough warning sign :)
  • by js3 ( 319268 ) on Wednesday May 18, 2005 @09:34AM (#12565815)
    When I first read your reply, I thought it said:
    Most dictators have tweaked the kernels to suit their needs anyway, nothing stops them from implementing this in their own kernel version.
  • Re:Great (Score:3, Insightful)

    by Anonymous Coward on Wednesday May 18, 2005 @09:36AM (#12565832)
    He ran the bloody "exploit" well over 1000 times to retrieve 30% of an RSA key.

    Stop a mo and take a deep breath.

    Take a look at your (heavily patched, I'm sure) machine. If you had an unsupervised 24 hours with that box and an unprivileged account, how many other actually useful exploits could you run for key retrieval?

    I can think of about 5 methods right now that are much more likely to yield results within Linux. Even on a very up-to-date system with a decent admin.

    Pfft.

    - Storm in a teacup
  • by DrPizza ( 558687 ) on Wednesday May 18, 2005 @09:39AM (#12565865) Homepage
    The vulnerability is not Intel-specific. This kind of timing information can be leaked on any system whose data caches are too small to hold the entire lookup table an algorithm uses, or whose OS multitasks, or which uses a lookup table with certain characteristics, or....

    The problem is not with hyperthreading. It's not with Linux. It's with the implementation of the encryption algorithms. They need to stop assuming that table lookups are constant-time. Because they're not. As good as constant-time for most purposes, yes. But for cryptography they're not good enough.
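
    As a rough illustration of what "not assuming table lookups are constant-time" can mean in practice, here is a hedged sketch of one common approach: touch every table entry and select the wanted one with a mask, so the memory access pattern does not depend on the secret index. The function name and types are purely illustrative, not taken from any real crypto library.

    /* Hypothetical constant-time table lookup: read all n entries and
       mask out everything except table[secret_index], so the cache
       footprint is the same no matter what the secret index is. */
    #include <stdint.h>
    #include <stddef.h>

    uint32_t ct_lookup(const uint32_t *table, size_t n, size_t secret_index)
    {
        uint32_t result = 0;
        size_t i;
        for (i = 0; i < n; i++) {
            /* mask is all ones only when i == secret_index */
            uint32_t mask = (uint32_t)0 - (uint32_t)(i == secret_index);
            result |= table[i] & mask;
        }
        return result;
    }

    The obvious cost is that every lookup now reads the whole table instead of a single entry, which is part of the performance penalty other posters mention.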
  • by antifoidulus ( 807088 ) on Wednesday May 18, 2005 @09:41AM (#12565884) Homepage Journal
    Heh, this is more than just your average buffer overflow exploit. The fix would have to modify how the OS handles the cache. It's probably going to take more than a quick fix to get rid of the exploit, and the patch could have far-reaching repercussions. All that to fix a security hole that may not even be exploitable in practice...
  • by julesh ( 229690 ) on Wednesday May 18, 2005 @10:08AM (#12566151)
    It isn't something Intel can fix, except by removing hyperthreading. The fact of the matter is that if you have two processes running simultaneously on the same processor, one of them can determine things about what the other is doing based on how many execution units of what type it seems to have access to, how long data remains in the cache, things like that. It's a fundamental problem that can only be fixed by the OS kernel denying use of hyperthreading to processes that need to be kept separate from one another.
  • by NighthawkFoo ( 16928 ) on Wednesday May 18, 2005 @10:10AM (#12566185)
    Hashing the passwords would probably help in this case, since then a single character change would completely alter the entire hash.
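
    In the same spirit, the timing leak in a straight strcmp() can be avoided by comparing fixed-length hashes with a comparison whose running time does not depend on where the first mismatch occurs. A minimal sketch, assuming both inputs are hashes of the same fixed length (the function name is illustrative):

    /* Hypothetical constant-time comparison: always inspects every byte,
       so the time taken reveals nothing about where the buffers differ. */
    #include <stddef.h>

    int ct_equal(const unsigned char *a, const unsigned char *b, size_t len)
    {
        unsigned char diff = 0;
        size_t i;
        for (i = 0; i < len; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }

    Combined with hashing, a mismatch in the first byte and a mismatch in the last byte are indistinguishable to the attacker's clock.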
  • by stratjakt ( 596332 ) on Wednesday May 18, 2005 @10:11AM (#12566198) Journal
    I agree, why not have your crypto loops piss away a few extra cycles, randomly.

    for (i = 0; i < some_random_number; i++) { do_some_stuff_that_looks_like_crypto(); } /* burn a random number of extra cycles */

    This seems like another pointless nerd debate. It sounds like that "hack a key by examining the frequencies the CPU emits" type stuff.

  • by julesh ( 229690 ) on Wednesday May 18, 2005 @10:25AM (#12566336)
    That's good advice, but many people cannot afford to follow it. Specialist crypto hardware is too expensive for most users, and keeping a PC in a physically secure environment can be difficult and contradictory with other important considerations. There are many thousands of small internet-based businesses who simply cannot afford this level of protection, and must rely on "secure" servers rented from virtual server farms. Fixing problems like this quickly after they are discovered is important to help protect these people.
  • by Urkki ( 668283 ) on Wednesday May 18, 2005 @10:30AM (#12566395)
    • Unless I missed something, disabling HT _is_ a real fix.

    Sounds like it doesn't fix the problem; it replaces it with another one (reduced performance). That's not a *real* fix to any problem.

    Compare: committing suicide to "fix" personal problems.
  • by julesh ( 229690 ) on Wednesday May 18, 2005 @10:32AM (#12566418)
    The best solution is to just fix the crypto libraries as a short term solution, and for Intel to fix the chip in future iterations as a long term solution.

    Fixing the crypto libraries is actually a non-trivial task. There are many of them and ensuring there is no information leakage is very difficult to achieve.

    Fixing the chip may well be impossible -- it sounds to me like the only way to prevent this kind of thing from being a problem is to give each thread its own instruction fetch pipeline, dedicated execution units and dedicated cache lines. Which is, in effect, dropping HT altogether and switching to dual cores instead.

  • Re:Linus and RMS (Score:1, Insightful)

    by Anonymous Coward on Wednesday May 18, 2005 @10:34AM (#12566443)
    See the story inside the latest OpenBSD CD sets....
  • by jrockway ( 229604 ) <jon-nospam@jrock.us> on Wednesday May 18, 2005 @10:40AM (#12566493) Homepage Journal
    His comment isn't from a CS perspective, it's from a code monkey perspective. CS people use mathematics to prove their code correct, application programmers write stuff and are happy it works.
  • Other examples (Score:3, Insightful)

    by John Harrison ( 223649 ) <johnharrison@@@gmail...com> on Wednesday May 18, 2005 @10:44AM (#12566540) Homepage Journal
    One need only look at the smart card world to see all sorts of side-channel attacks that are harder to execute than this. First there was power analysis; then, when countermeasures were implemented, there was differential power analysis; then more countermeasures; now they use RF leakage, so there are countermeasures against that too.

    If a process is leaking information somewhere, then there will be people clever enough to pull that information out.

    That said, it seems that this is more of a library problem than a Linux problem.

    If you need any real security you should be doing your private-key operations in an HSM anyhow, not on your CPU.

  • by null etc. ( 524767 ) on Wednesday May 18, 2005 @10:49AM (#12566582)
    By timing multiple runs you can get a decent estimate of how long the strcmp function took, which means you can guess which character was the first differing character in the password.

    Can I buy some pot from you?

    Maybe that would work with a ONE MEGAHERTZ PROCESSOR. But do you have any idea how fast processors are these days? And how likely any deviance in the cache state, IO controller state, page faults, multi-user latency, or power management will throw your precious timings right out the window?!?!?!

    I mean, c'mon, think about things before you say them. Even REAL TIME SYSTEMS AT NASA don't run with enough consistency to be able to tell WHICH CHARACTER IN A STRCMP OPERATION fails.

  • by Otto ( 17870 ) on Wednesday May 18, 2005 @10:52AM (#12566622) Homepage Journal
    Hence, this is an issue that effects me and my customers, and I seriously hope that a fix finds itself into either apache mod_ssl or the mainline Linux kernel PDQ.

    That's really what's up for debate here: whether the patch should be in kernel-land or in user-space code (mod_ssl, in your example).

    The only realistic patch you could do in kernel-land is to simply disable HyperThreading. This works, but it seems like a poor way to go. Any other form of kernel-land patch either just makes the attack harder (and thus doesn't really work) or degrades performance way too much to be practical.

    But fixing it in userspace is somewhat easier to do, albeit you'd have to fix *every* user-space program that's susceptible to this sort of thing.

    Let's talk about the problem in general terms. When a program is doing some kind of computational stuff on something you want to remain secret, it has to make some assumptions. Assumptions like the hardware being secure, or that it's not running on a virtual machine that's recording everything it does... that sort of thing. You can come up with all kinds of ways to crack it like an egg if you work outside the box a bit and have total control of the machine it runs on.

    This problem is attacking one of those assumptions, namely that another process can't time the secret computations accurately enough to perform a timing attack. With HT, you have two things running on the same core, and so it is somewhat easier to do this sort of attack.

    So userspace programs that do secure computations have had one of their assumptions broken by HT. To remedy it, they need to rethink their assumptions. They need to ensure that they perform equal timings regardless of the computations being done, and so on. This is not particularly simple, but it's probably not particularly hard either.

    Of course, the attack is still largely theoretical. All that's been shown is that it's "possible", not that it's "easy" or even that it is indeed "doable". For one thing, without having some kind of clue as to the algorithm involved or some idea of what to look for, all you get are a bunch of timings. You still need to do some things to trigger it at the right time and in the right way to be able to derive information from this channel.

    But crypto guys are paranoid like nobody else, and so they're naturally worried about this sort of thing. Mainly it's worrying to them because it's not a mathematical attack, which is what they're more used to. Modern crypto works based on theory and algorithms and such, and the idea that the algorithm being correct (for a given value of "correct") isn't enough to protect the security of the data is extremely worrying. A real-world implementation of these algorithms now has to take some more real-world facts into account, and this bothers them, of course.

    Linus is basically right here. The kernel is simply the wrong place to fix this. It doesn't ensure that processes cannot spy on other processes via subchannels like this, nor should it. If you're paranoid enough to think this is a real thing to guard against, then your secure code should take it into account. Existing code doesn't do that, and would need to be changed *even* if the kernel was patched. Because how do you know that your kernel has been patched? How do you know that you're not running on an HT processor? You can't know for sure, so you simply assume you are and take steps to make timing attacks fail. Because if you don't, you can't reasonably say that you've attempted to secure the code in this way.
  • by ajs318 ( 655362 ) <sd_resp2@earthsh ... .co.uk minus bsd> on Wednesday May 18, 2005 @10:53AM (#12566633)
    This isn't a kernel problem.

    It's either an application problem or a hardware problem. Basically, used memory is not being zeroed out, so one programme can look at what another programme left behind.

    In the case of a software frame buffer {like the 1980s home computers with bit-mapped graphics: Spectrum, BBC et al} failure to zero out memory causes spurious artifacts on the display. You can see this if you switch between graphics modes by writing to the hardware registers directly rather than using the "proper" system calls which clear the screen. {On the Beeb, you could actually grow a stack through the display memory. Pretty, but you'd better hope not to scroll the screen or print over that area.} The solution was implemented in software: create system calls that zero-out the display memory when switching graphics modes. {As a bonus, your users need only send one number to the routine, which can poke the right values into all the relevant registers on their behalf. They just don't get to invent their own graphics modes, but if you ever update the display ULA in future then at least you won't kill half the games software in existence.}

    What we're talking about here is cache memory not being zeroed out between uses by successive processes. That looks to me very much like a hardware problem. It's not even an easy problem. My guess is that the implementors looked at it, decided "It's potentially insecure in theory but bloody difficult to make use of in practice", and left it that way on purpose. Like there's no point fitting an expensive lock on a wooden door with a person-sized glass panel in the middle of it -- especially if that door is only accessible through an overgrown garden with an underfed Alsatian in it.

    BTW, crypto software running in userland could never, ever be made immune to snooping from kernel space -- at least, not on a system with any kind of debugging. The solution is to read and understand every bit of the kernel source -- including all drivers -- or get some independent expert to do so for you, so as to be sure the kernel contains nothing that could be used for malicious purposes. Hardware crypto devices would be more immune to tampering -- but less susceptible to independent verification.

    Imagine this: <CHEESY MEXICAN ACCENT>Hey, extranjero! You want to send secret message? I chave code so secret, nobody onderstand eet 'cep' for me an' my brother. Djou dictates to me, one word a time, I write eet down in secret code. Then I send eet to my brother and che go to your amigo, and read heem the secret code. Nobody in world onderstand 'cep' my brother.</CHEESY MEXICAN ACCENT>
  • by Jeppe Salvesen ( 101622 ) on Wednesday May 18, 2005 @11:26AM (#12566938)
    If Red Hat or SuSE or anyone else disagrees with Linus, they can simply produce and apply a patch to their own kernels while releasing the patch itself to the public.

    This is one of the good aspects of open-source software: If you disagree, you can fork or simply distribute a patched product.
  • by poot_rootbeer ( 188613 ) on Wednesday May 18, 2005 @11:55AM (#12567267)
    If the server used HT, it would be possible for one of those other users to run an exploit on the server to crack my e-commerce site's private key.

    It may be possible, yes. But plausible?

    this is an issue that effects me and my customers

    It MAY affect you and your customers at some later time, but right now it doesn't.

    If you're THAT concerned about this issue, I assume you're going to call up your ISP and transition your site onto dedicated machines? Isn't it worth the extra cost to be assured that some other customer of the shared server environment can't compromise your crypto key?
  • by Anonymous Coward on Wednesday May 18, 2005 @11:55AM (#12567268)
    Except the attack would have to be executed by someone who has access to the system... and if they are on the system already, why would they waste time trying to hack your key using this obtuse method when they could just hack your files and DB much more easily? Quite simply, this vulnerability is highly overrated, as an attacker would have to be doing it just for the academics, ignoring other more convenient and already exploitable attack vectors.
  • by Drakonian ( 518722 ) on Wednesday May 18, 2005 @11:59AM (#12567304) Homepage
    There are serious problems with your endearing allegory.

    1. Microsoft is just as potentially vulnerable as Linux. Their dirty laundry just doesn't get aired in public.

    2. The fix is non-trivial and non-obvious. It's not a simple buffer overflow. Any patches are likely to have serious repercussions on kernel performance, e.g. disabling HT, ensuring that only threads of the same UID are scheduled on the same processor, flushing the cache at every context switch, etc. It looks like Linus is unwilling to accept them unless this vulnerability moves from the theoretical to the proven.

  • by iabervon ( 1971 ) on Wednesday May 18, 2005 @12:25PM (#12567619) Homepage Journal
    Linus probably would do something about this if all the cryptographers in the whole world said it mattered. But, so far, Percival is the only person who seems to think it's actually a problem. Nothing on the subject from Bruce Schneier. And, while he says Linus should talk to the SELinux people, he probably doesn't realize that they have almost certainly heard about this and didn't comment in the thread.

    It wouldn't be hard to have an option to prevent processes with different owners from running on the same physical CPU at the same time. It wouldn't even affect the case that Linus mentioned. But cryptographers don't seem to think it's a plausible attack anyway, aside from carefully arranged conditions. The discussion was entirely over whether it would be less foolish to prevent it in the kernel or in userspace, and nobody seems to have argued that anything should be done at all.
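
    The per-owner scheduling option described above could, in principle, come down to a check like the following. This is only a hypothetical sketch of the rule, not actual Linux scheduler code; the structure and function names are invented for illustration.

    /* Hypothetical rule: only co-schedule a task on a hyperthread sibling
       if the task already running there belongs to the same owner. */
    struct fake_task {
        int uid;    /* owner of the task */
    };

    int may_share_physical_cpu(const struct fake_task *incoming,
                               const struct fake_task *on_sibling)
    {
        if (on_sibling == 0)
            return 1;                          /* sibling is idle */
        return incoming->uid == on_sibling->uid;
    }

    A real scheduler change would of course have to deal with migration, priorities and idle siblings, and would be considerably more involved than this.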
  • by Paradox ( 13555 ) on Wednesday May 18, 2005 @12:48PM (#12567894) Homepage Journal
    From what I've learned in software writing, it's preferable to wait and see how much and how badly your software has problems before you start charging into the situation to fix it.
    Wait. Wait wait wait. Who taught you this? This isn't XP. This isn't sound software practice. Maybe you're thinking of the infamous quote by C.A.R. Hoare, "Premature optimization is the root of all evil," perhaps?

    Potential performance problems are things you should defer on until proper profiling can be done (unless they're total show stoppers). Security and correctness are things you cannot ignore except in extreme cases. Security is particularly important to nail down, because it can result in your customers losing data (even data not pertaining to your app), which is the first no-no of software.

    Application software has four priorities, in this order:

    1. Safety (shouldn't destroy data)
    2. Correctness (do what it says it does)
    3. Security (don't do anything else)
    4. Performance (do it fast)
    YMMV, of course; sometimes correctness falls below security, and occasionally performance goes above correctness in some mathematical functions (if doing it correctly would take a decade and doing a close approximation would take a day, obviously you want the approximation and then a heuristic).
    Especially something as low-level as this, which could have unseen side effects. Especially since this (to me, at least) seems to be more of a hardware problem than software, per se. (But, of course, I could be wrong.)
    In this case, I'd say the proper fix is to disable hyperthreading by default, and make sure the user is aware of the hardware bug/consequence of using HT when they decide to turn it on. You need to let the user decide whether they're willing to accept the security risk or not.

    The Linux Kernel Developers may decide otherwise, but that's how I'd call it if it was in my shop. It's a hardware problem and the software fix is not obvious.

  • by Anonymous Coward on Wednesday May 18, 2005 @01:28PM (#12568397)
    > Actually, my bet is it will be fixed in the new CPU revision, by Intel

    No it will not. This CANNOT be fixed. It has nothing to do with privilege escalation, and everything to do with information "leakage" from observable effects. Intel can no more change these observable effects than they can make a CPU that can't be instrumented with performance counters.

    And guess what: Linus is right. Linux is not the place to address such issues, because this is only ONE place where such leakage can occur. Linux is not a military-grade computing system, and if you need such security, you need a dedicated system that NEVER pre-empts a higher-security process to allow a lower-security process to run. The attack is merely harder, not impossible, on truly multi-CPU machines.
  • Specialist crypto hardware is too expensive for most users

    If your secrets are worth enough that an attacker would go to the effort required to use this exploit... no, it is not too expensive. An IBM 4758 costs just over $1000 and provides an extremely high level of security.

    If you want to go really cheap, and don't need high performance, you can also use a $10 smart card with a $10 smart card reader. The result is less secure than with a 4758, or a Luna SA, or any of the other products on the market, in several ways, but it's much, much better than doing the crypto in main memory on a GP CPU.

    and keeping a PC in a physically secure environment can be difficult and contradictory with other important considerations.

    Again, if you need security you have to do what it takes to get security. If security requirements contradict other requirements, you have a problem to solve.

    At bottom, the point is that this is a sophisticated attack. There are much easier ways to dig keys out of a GP computer. If you need to defend against this attack, then you also need to defend against all of those, and defending against all of those requires physical security and use of secure crypto hardware. By the time you've closed all the other avenues of attack, you've closed this one, too.

    This problem is real, and it's a good idea to fix it, on principle. But it's not uncommon that real security holes are simply irrelevant because of other considerations.

    This is analogous to someone pointing out that my house is insecure because the attic vents are made of lightweight sheet metal. An attacker can use a ladder to climb up, use a crowbar to rip the vent off, climb into my attic, find the access panel and drop down into my house. That's all very true. However, compared to all of the *other* ways there are to break into my house, it's simply irrelevant. My house is simply not an appropriate place to store the Hope diamond, and replacing the attic vents with heavy steel ones, or otherwise securing access to the attic, isn't going to change that.

  • by wirelessbuzzers ( 552513 ) on Wednesday May 18, 2005 @04:45PM (#12570772)
    Yep, and like the old maxim goes, there's no security if the attacker has physical access. A lot of this collegiate discussion of measuring voltages and doing fine-grained timing would require having physical access.

    No. It requires having shell access. This is not the same as physical access, and a system should be secure against an attacker with shell access. Not that most are in fact secure, but they should be.
  • by iggymanz ( 596061 ) on Wednesday May 18, 2005 @11:16PM (#12573933)
    Hello Colin! How many of the world's servers really have local users that aren't already trusted with potential access to the business data? And for that matter, what percentage already have *physical* access to the machine? How many easier and more convenient ways do they have to snoop/steal/alter information than the hyperthread exploit? Heck, I'm an honest person and I can think of dozens of ways; I wonder what a creative sysadmin or DBA turned evil could conceive? As a general principle, potentially hostile users should NEVER be given local access to a server with information needing security, it's that simple. And failing to keep external users from getting local privileges will open the door to all manner of data snooping/destruction/theft, whether or not a hyperthreaded processor is involved.
