Topics: Encryption, Security, Bug, IT, Linux

Linux Servers' Entropy Pool Too Shallow, Compromising Security

The BBC reports that at this year's Black Hat Briefings, presenters Bruce Potter and Sasha Woods described a security flaw in Linux servers: too few events feed the entropy pool from which random numbers are drawn, which leaves the systems "more susceptible to well-known attacks." Unfortunately, [Potter] said, the entropy of the data streams on Linux servers was often very low because the machines were not generating enough raw information for them. Also, he said, server security software did little to check whether a data stream had high or low entropy. These pools often ran dry, leaving encryption systems struggling to get good seeds for their random number generators, said Mr Potter. This might mean they were easier to guess and more susceptible to a brute-force attack, because seeds for new numbers were generated far less regularly than was recommended. Update: 08/10 01:05 GMT by T: Please note that Sasha Woods' name was mis-reported as Sasha Moore; that has now been corrected in the text above.
  • Random (Score:5, Funny)

    by Anonymous Coward on Sunday August 09, 2015 @08:31AM (#50279171)

    So a random number walks into a bar. The barman says, "I was expecting you."

  • by Anonymous Coward on Sunday August 09, 2015 @08:34AM (#50279177)

    Server rooms could have cameras filming cats to generate more entropy from.

  • That's why I use
    http://www.issihosts.com/haveg... [issihosts.com]
    Wouldn't it also be beneficial to enlarge the pool size from the current 1024?
    • Re: (Score:2, Informative)

      by Anonymous Coward

      I use https://github.com/waywardgeek... [github.com]

      Cheap, reasonable bitrate hardware TRNG, for adding entropy. Entirely open source (code and hardware) with plenty of documentation about how it works.

  • Time to go pro [youtube.com].

  • have something useful to contribute. And not mess up whole eco-systems. ;-)

  • Virtualization (Score:5, Interesting)

    by DarkOx ( 621550 ) on Sunday August 09, 2015 @09:00AM (#50279267) Journal

    How much of this problem is due to old assumptions about running on real hardware? That is to say, the entropy pool is fed from lots of sources, most of them system devices. Is this just an unintended consequence of running on cut-down virtual hardware platforms?

    • Virtualization is a strong candidate because everything can be so samey, but it can happen on real hardware too - imagine trying to generate randomness on a basic MIPS-based home router [lwn.net] with flash-based storage, no hardware RNG, typically booting from a fixed, pre-extracted RAM disk image, and no hardware clock to save the time when powered off, yet it generates its ssh certs early during its first boot...

    • Re:Virtualization (Score:4, Informative)

      by swillden ( 191260 ) <shawn-ds@willden.org> on Sunday August 09, 2015 @01:11PM (#50280213) Journal

      How much of this problem is due to old assumptions about running on real hardware? That is to say, the entropy pool is fed from lots of sources, most of them system devices. Is this just an unintended consequence of running on cut-down virtual hardware platforms?

      The researchers specifically addressed virtualization in the talk, providing different measurements of entropy available on real vs virtual machines. Real machines generate roughly twice as much entropy per unit of time, but both generate 2-3 orders of magnitude less than systems consume. However, as I noted in my other lengthy post, it's not clear that this really matters.

      • How much of this problem is due to old assumptions about running on real hardware? That is to say, the entropy pool is fed from lots of sources, most of them system devices. Is this just an unintended consequence of running on cut-down virtual hardware platforms?

        The researchers specifically addressed virtualization in the talk, providing different measurements of entropy available on real vs virtual machines. Real machines generate roughly twice as much entropy per unit of time, but both generate 2-3 orders of magnitude less than systems consume. However, as I noted in my other lengthy post, it's not clear that this really matters.

        How significant are these corner cases? It seems to me that the problem is critical only after a reboot. The warning is about software that starts at reboot and requires some random bytes. Can that software be made to wait two or three minutes to allow entropy to build up? (systemd allows deferring the autostart of a program by some amount of time after a reboot, and systemd can be used inside the virtual machine that is restarted as well.)

        All I understand from the notice is that programm

        • Can that software be made to wait two or three minutes to allow entropy to build up?

          Even better, the software can be made to watch the entropy pool (there's an ioctl for that) and wait until there's enough. And it can also be made to re-seed periodically while running in order to limit any vulnerability arising from bad estimates early.
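That ioctl is RNDGETENTCNT. A minimal sketch of the watch-and-wait idea, assuming a Linux system with the usual headers; the helper name, the 128-bit threshold, and the one-second poll interval are illustrative choices rather than anything the kernel prescribes:

```c
/* Sketch: block until the kernel's entropy estimate reaches min_bits,
 * using the RNDGETENTCNT ioctl on /dev/random. */
#include <fcntl.h>
#include <linux/random.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int wait_for_entropy(int min_bits)
{
    int fd = open("/dev/random", O_RDONLY);
    if (fd < 0)
        return -1;

    int bits = 0;
    for (;;) {
        if (ioctl(fd, RNDGETENTCNT, &bits) < 0) {
            close(fd);
            return -1;
        }
        if (bits >= min_bits)
            break;
        sleep(1);            /* crude poll; a real daemon would back off more politely */
    }
    close(fd);
    return bits;
}

int main(void)
{
    int bits = wait_for_entropy(128);
    if (bits < 0) {
        perror("wait_for_entropy");
        return 1;
    }
    printf("kernel entropy estimate: %d bits\n", bits);
    return 0;
}
```

Periodic re-seeding, as suggested above, would just mean repeating a check like this (or simply reading fresh bytes from /dev/urandom) on a timer after the initial seed.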

  • by Sits ( 117492 ) on Sunday August 09, 2015 @09:51AM (#50279413) Homepage Journal

    For the interested: the Understanding-And-Managing-Entropy-Usage Black Hat whitepaper [blackhat.com].

    So it seems this is the classic problem: (Linux) programmers are told to use /dev/urandom (which never blocks), and some programs do so at system startup, so there's the opportunity for "insufficient" randomness because not enough entropy has been gathered at that point [stackoverflow.com] in time. In short: using /dev/urandom is OK, but if you are using it for security purposes you should only do so after /dev/random would have stopped blocking for a given amount of data for the first time since system startup (and there's no easy way to determine this on Linux). Or is there? Since the v3.17 kernel there is the getrandom [man7.org] syscall, which has the behaviour that if /dev/urandom has never been "initialised" it will block (or it can be made to fail right away by using flags). More about the introduction of the Linux getrandom syscall can be read on the always good LWN [lwn.net]. And yes, the BSDs had defences against this type of situation first :-)
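For reference, a minimal sketch of the blocking getrandom() usage described above, assuming a kernel >= 3.17 and a libc that exposes the <sys/random.h> wrapper (glibc >= 2.25); on older systems you would need to call syscall(SYS_getrandom, ...) directly:

```c
/* With flags = 0, getrandom() blocks only until the urandom pool has been
 * initialised once after boot, then behaves like /dev/urandom forever. */
#include <stdio.h>
#include <sys/random.h>
#include <sys/types.h>

int main(void)
{
    unsigned char key[32];

    if (getrandom(key, sizeof key, 0) != (ssize_t)sizeof key) {
        perror("getrandom");
        return 1;
    }
    printf("got %zu key bytes, safe even early in boot\n", sizeof key);
    return 0;
}
```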

    So this is bad for Linux systems that generate security-related "things" that depend on randomness early in startup, but there may be mild mitigations in real life. If the security material is regenerated at a later point after boot, there may be enough entropy around by then. If the system is rebooted but preserves entropy from the last boot, this is mitigated for random material generated in subsequent boots (so long as the material was generated after the randomness was reseeded). If entropy preservation never takes place, then regeneration won't help early-boot programs. If the material based on the randomness is never regenerated, then again this doesn't help. And if you take a VM image and the entropy seed isn't reset, you've stymied yourself because the system believes it has entropy that it really doesn't.

  • Myths about urandom (Score:5, Informative)

    by hankwang ( 413283 ) on Sunday August 09, 2015 @10:07AM (#50279461) Homepage

    This article, Myths about urandom [2uo.de], explains why it's generally silly to worry about dried-up entropy pools. There are two scenarios where this might be an issue:

    1. There is a compromise that allows an attacker to calculate the internal state of the PRNG.
    1a. That could be because the PRNG is leaking information about its internal state through its output. That would be really bad, but there are no known or suspected attacks.
    1b. The server is compromised in some other way. Then it wouldn't matter whether it's /dev/urandom or /dev/random; you are hosed anyway.

    2. There is no 'true' entropy at all, which could happen on a server which does not store its internal state between reboots and which does not manage to gather 512 bits of true entropy-generating interactions between boot time and the first use of /dev/urandom. This would be an issue only in very specific use cases, certainly not as generic as TFA suggests.

    • This isn't so much about entropy "drying up" a few days after the system has booted - this is more about generating random numbers just after a system has booted and before "enough" entropy was gathered in the first place. From your link:

      Not everything is perfect
      [...]
      Linux's /dev/urandom happily gives you not-so-random numbers before the kernel even had the chance to gather entropy. When is that? At system start, booting the computer.

      but also from your link

      FreeBSD does the right thing[...]. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again.
      [...]
      On Linux it isn't too bad, because Linux distributions save some random numbers when booting up the system (but after they have gathered some entropy, since the startup script doesn't run immediately after switching on the machine) into a seed file that is read next time the machine is booting.
      [...]
      And it doesn't help you the very first time a machine is running, but the Linux distributions usually do the same saving into a seed file when running the installer. So that's mostly okay.
      [...]
      Virtual machines are the other problem. Because people like to clone them, or rewind them to a previously saved check point, this seed file doesn't help you.

      So it's not great, but not (always) a disaster, and modern Linux allows programs to counter this if they wish by using getrandom [slashdot.org].

      • by KGIII ( 973947 )

        Why, in the real world, is this so important? I am obviously missing something. Hopefully someone will hit me with a clue stick. I fail to see why this desire for as close to truly random as one can get is actually a valuable thing for, well, pretty much anything. Given the frequency of these posts (with seemingly good replies) I am obviously not getting something.

        • by Fweeky ( 41046 )

          Because a lot of security boils down to "I'm thinking of a number between 0 and $something, I bet an attacker can't guess it at a rate better than blind chance".

          e.g. a 128-bit encryption key is a number between 0 and 340282366920938463463374607431768211455. With a secure random number generator, an attacker will have to test, on average, half of those possible keys before finding the correct one, because he can't know anything that will reduce the space he has to search.

          If your random number generator is br

          • by KGIII ( 973947 )

            That makes sense. I knew I had to be missing something. I am not a crypto guy or even a systems hardening guy. Whilst I am a maths geek, I really do not do anything with crypto. Thanks.

    • Thanks for posting the link. It's so good, IMO, that I had it in my bookmarks. It points out that the entropy is fed to the CSPRNG.

      Disclaimer: I'm not a crypto guy, but I do read a fair amount about it.

      One CSPRNG construction is to run a block cipher in counter (CTR) mode. That is, encrypt the sequence 1, 2, 3, ... with the cipher and use the output of that. You start with a random key. But the security of the CSPRNG is based on breaking the cipher (e.g. finding the key) from the output of the CSPRNG that is the "random" th
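For concreteness, here is a hedged sketch of that counter-mode construction using OpenSSL's EVP interface (build with -lcrypto); the function name ctr_prng_bytes, the choice of AES-256-CTR, and seeding from getrandom() are assumptions for the example, and a real CSPRNG would also reseed and protect its key material:

```c
/* Counter-mode CSPRNG sketch: key AES-256-CTR with a random seed and use
 * the keystream (the encryption of successive counter blocks) as output. */
#include <openssl/evp.h>
#include <stdio.h>
#include <sys/random.h>
#include <sys/types.h>

static int ctr_prng_bytes(unsigned char *out, size_t n)
{
    unsigned char key[32], counter[16] = {0};   /* the IV acts as the counter block */
    if (getrandom(key, sizeof key, 0) != (ssize_t)sizeof key)
        return -1;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    if (!ctx || EVP_EncryptInit_ex(ctx, EVP_aes_256_ctr(), NULL, key, counter) != 1)
        return -1;

    /* Encrypting a zero-filled buffer in CTR mode yields the raw keystream,
     * i.e. AES_k(counter), AES_k(counter+1), ... */
    unsigned char zeros[4096] = {0};
    size_t done = 0;
    while (done < n) {
        int outlen = 0;
        size_t chunk = (n - done < sizeof zeros) ? n - done : sizeof zeros;
        if (EVP_EncryptUpdate(ctx, out + done, &outlen, zeros, (int)chunk) != 1)
            return -1;
        done += (size_t)outlen;
    }
    EVP_CIPHER_CTX_free(ctx);
    return 0;
}

int main(void)
{
    unsigned char rnd[64];
    if (ctr_prng_bytes(rnd, sizeof rnd) != 0)
        return 1;
    for (size_t i = 0; i < sizeof rnd; i++)
        printf("%02x", rnd[i]);
    printf("\n");
    return 0;
}
```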

  • by Greyfox ( 87712 ) on Sunday August 09, 2015 @10:20AM (#50279519) Homepage Journal
    There's just no more entropy, man! Entropy isn't what it used to be!

    But I have a solution! A good solution! A GREAT solution! Behold! Yes, a banana! As we all know, bananas are radioactive! So all we need to do is attach a particle detector to our computer and put a bunch of bananas right on top! Boom! Bananarand! You'll just need to remember to change your bananas out every so often as their half-life is very short. After about a week your bananas will decay into fruit fly particles (I'm not a nuclear scientist, I just play one on TV.)

    All right fine, if you don't want to use a banana, United Nuclear has some lovely uranium samples for sale at the moment. Pretty sure you get on a list if you actually order one. Possibly if you click on that link. The radioactive Fiestaware they're selling would probably also work. While you're there, check out their selection of EXTREMELY DANGEROUS MAGNETS!

  • This is very true now that everything goes towards virtualization and clouds. Systems in the cloud are especially susceptible to low entropy issues.
    Companies like Amazon and Google should step up and provide true hardware entropy sources for systems that they host.
    And it's a known problem: there has been chatter about it for years, yet no one has stepped up.
    Hopefully exposure at Black Hat will finally get them to do something about it.

  • by swillden ( 191260 ) <shawn-ds@willden.org> on Sunday August 09, 2015 @11:10AM (#50279675) Journal

    I attended the (very interesting) Black Hat talk, and neither the article nor the /. summary do a very good job of summarizing it.

    From memory (I didn't take notes), the key points were:

    1. Tracking the entropy pool is a little harder than you might expect, because the kernel itself uses quite a lot. The primary reason is ASLR, but there are other ways the kernel uses entropy itself. The kernel is effectively drawing from /dev/urandom at a very high rate, pulling thousands of bits every time it starts a process.

    2. /dev/urandom vs /dev/random work a little differently than most people expect. There are actually three entropy pools, a "main" pool, a /dev/random pool and a /dev/urandom pool. Both /dev/random and /dev/urandom use the same PRNG, and both try to maintain 0 bits of entropy in their local pools, drawing a block from the main pool when needed and mixing it into their local pools (keep in mind that a pool is many bytes of data plus an estimate of how much entropy is in that data). /dev/random, obviously, blocks when it runs out of entropy in its local pool and there isn't enough in the main pool to satisfy the request. /dev/urandom works the same way, except that (a) it won't block and (b) it won't draw the main pool below 128 bits. When the main pool drops to 128 bits, /dev/urandom stops pulling from it.

    3. The rate of entropy going into the main pool is low, on the order of single-digit bits per second. For some reason Potter and Woods didn't understand, using a daemon to get bits from the CPU HWRNG not only didn't increase the estimated entropy inflow rate, but actually decreased it (I had to step out for a bit around that point in the talk so I missed details).

    4. Points 1, 2, and 3 taken together mean that the entropy pool is basically never above 128 bits. The kernel is always drawing on /dev/urandom, typically at a much higher rate (hundreds to thousands of bits per second pulled from urandom vs <10 bits per second going in).

    5. OpenSSL seeds its internal CPRNG during startup and then just uses it, potentially forever, without ever reseeding. Worse, when it seeds from /dev/urandom at startup it makes no effort to check whether or not the kernel pool has any entropy to give it. It just gets some bytes and goes. This means that if an apache server starts up when there isn't much entropy in the pool, that very small amount of entropy could get stretched over a very large number of cryptographic operations. (Aside: I do a little work on and with BoringSSL, Google's OpenSSL fork, and it does reseed regularly, and also uses a more trustworthy CPRNG. I highly recommend using BoringSSL rather than OpenSSL.)

    6. Android actually does better than desktop/server Linux, producing many more bits per second of entropy, though still far less than are demanded. Potter attributes this to the rich source of randomness available from the cellular radios.

    How much any of this matters is hard to say. Entropy estimation is something of a black art at best, or wild ass guess at worst. Also, the kernel is known to be extremely conservative with at least one of its entropy sources: the HW RNG in most CPUs. Because there's concern the NSA may have backdoored those HW RNGs the kernel assumes their output is perfectly predictable, meaning that it provides zero entropy. The kernel mixes in HW bits anyway, because they can't hurt even if they are 100% predictable, and to whatever extent they're unpredictable they help.

    In addition, the whole concept of entropy pools of known size which are drawn down and refilled is an extremely conservative one. Modern CPRNGs are able to produce enormous amounts of uniformly-distributed, unpredictable bits from fairly small seeds -- say, 256 bits of entropy. Assuming the kernel ever manages to get even a

    • 5. OpenSSL seeds its internal CPRNG during startup and then just uses it, potentially forever, without ever reseeding. Worse, when it seeds from /dev/urandom at startup it makes no effort to check whether or not the kernel pool has any entropy to give it. It just gets some bytes and goes.

      That would only be a problem if the boot scripts start up OpenSSL before seeding urandom. Are there any server distros that do that? At least CentOS 6 does it in rc.sysinit, way before the network-related stuff is started.

      • 5. OpenSSL seeds its internal CPRNG during startup and then just uses it, potentially forever, without ever reseeding. Worse, when it seeds from /dev/urandom at startup it makes no effort to check whether or not the kernel pool has any entropy to give it. It just gets some bytes and goes.

        That would only be a problem if the boot scripts start up OpenSSL before seeding urandom. Are there any server distros that do that? At least CentOS 6 does it in rc.sysinit, way before the network-related stuff is started.

        That only helps if there's some stored entropy to load during boot. There generally is something to load... how much entropy it has is harder to say.

        • That [restoring random seeds] only helps if there's some stored entropy to load during boot. There generally is something to load... how much entropy it has is harder to say.

          Well, I'd say that managing entropy for the PRNG is mainly relevant for network-connected servers that need to generate encryption keys all the time. I assume that the timing of incoming network packets counts towards the entropy pool.(*) There should be plenty of network packets to ensure the accumulation of at least a kilobit of entro

          • There should be plenty of network packets to ensure the accumulation of at least a kilobit of entropy during the normal course of operation.

            Sure, eventually. But what happens later doesn't matter if Apache has already fired up its workers and they've already initialized their OpenSSL CPRNGs. This is why OpenSSL should re-seed periodically -- and why BoringSSL does re-seed periodically. So if you start with the CPRNG in a predictable state, at least it doesn't stay that way.
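As an illustration only (real TLS libraries should handle this internally, and BoringSSL does), an application could mix fresh kernel randomness into OpenSSL's default RNG with RAND_add(); the 32-byte seed and the idea of running this on a timer are assumptions for the sketch:

```c
/* One-shot reseed sketch: pull fresh bytes from the kernel and credit them
 * to OpenSSL's RNG.  A long-running server would repeat this periodically. */
#include <openssl/rand.h>
#include <stdio.h>
#include <sys/random.h>
#include <sys/types.h>

int main(void)
{
    unsigned char seed[32];

    if (getrandom(seed, sizeof seed, 0) != (ssize_t)sizeof seed) {
        perror("getrandom");
        return 1;
    }
    /* Third argument is the entropy estimate, in bytes. */
    RAND_add(seed, (int)sizeof seed, (double)sizeof seed);
    printf("mixed %zu fresh bytes into OpenSSL's RNG\n", sizeof seed);
    return 0;
}
```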

            I can't find any list online of linux entropy sources that are used.

            This is one of the other points the researchers noted: There's very little information available on how it all works other than the source code. Of course, the source code is

    • Re: (Score:3, Informative)

      by Anonymous Coward

      Basically there's no proof that /dev/urandom is less safe than /dev/random - there have been no published results showing it. It internally mixes in data from SHA-1 hashes of entropy coming from random events, the known attacks against SHA-1 aren't very good [wikipedia.org], and that's before taking into consideration that those hashes are mixed in well. There's a pretty good reason all the attacks come from a lack of entropy, but as soon as you get some - you're set.

      That said, if you have a source of random information yo

      • Basically there's no proof /dev/urandom is less safe than /dev/random

        Yep. As I said, the whole pool entropy measurement concept is extremely conservative in that there are no known theoretical attacks, much less practical ones, that depend on drawing from /dev/urandom when it has low entropy.

        Also, there's an ioctl you can use [man7.org] on /dev/random - RNDGETENTCNT - to see how much entropy your system currently has, if you want to play around with it a bit.

        Or you can cat /proc/sys/kernel/random/entropy_avail

        Note that doing that will start a new process, which consumes entropy, so don't be surprised if it gradually drops over repeated calls.
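A process that wants the same number without spawning a cat can just read the file itself (or use the ioctl mentioned above); a minimal sketch:

```c
/* Read the kernel's current entropy estimate from /proc without
 * starting a new process. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
    int bits = -1;

    if (f) {
        if (fscanf(f, "%d", &bits) != 1)
            bits = -1;
        fclose(f);
    }
    printf("entropy_avail: %d bits\n", bits);
    return bits < 0;
}
```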

      • by sinij ( 911942 )
        This is not how it works. Take the sequence 1, 2, 3, ..., n, pass it through SHA-1, and you will get a very predictable set of random-looking numbers that will pass every test. As such, the "because SHA-1" argument is absurd.
    • by Kjella ( 173770 )

      Also, the kernel is known to be extremely conservative with at least one of its entropy sources: the HW RNG in most CPUs. Because there's concern the NSA may have backdoored those HW RNGs the kernel assumes their output is perfectly predictable, meaning that it provides zero entropy.

      So unless the NSA has built in a backdoor and it's the NSA trying to hack you, the CPU has plenty of entropy. I'm glad they're trying to make it NSA-proof as well, but I would think any security is fucked if you can't trust the hardware to execute the instructions it gets.

      • It's not about not trusting the hardware to execute the instructions. If the NSA were to backdoor the RNGs, they wouldn't do it by making the RNGs produce bad output on command... that would be too easy to detect. Instead, they'd just reduce the entropy of the output, all the time, but not so much that it's detectable. For example, perhaps for every 128-bit output, the RNG generates one of 2^40 128-bit blocks. Now, if you use that RNG's output as an AES key, say, then your key is easily brute-forceable by

    • by sinij ( 911942 )
      Entropy estimation isn't a black art. Here is how you do it: take the SP 800-90B recommended tests - Markov, Compression, Frequency, Collision, and Partial Collection - collect 1 million samples, then take the minimum of the individual test results. Or just do Markov. As long as you take care to analyze raw data (not conditioned), you can find out what your entropy is.
      • Entropy estimation isn't a black art. Here is how you do it: take the SP 800-90B recommended tests - Markov, Compression, Frequency, Collision, and Partial Collection - collect 1 million samples, then take the minimum of the individual test results. Or just do Markov. As long as you take care to analyze raw data (not conditioned), you can find out what your entropy is.

        That doesn't tell you anything about the entropy of the data. At all. Try this: Take your favorite CPRNG and seed it with all zeros. Then generate all of the samples you need for exhaustive testing and run all of the tests. The completely deterministic output will pass with flying colors.

        While you're at it, try this test as well: Take the output of a known quantum random process (thermal noise, etc.). Don't filter or process it in any way, use the raw data. Now run the tests on it. The truly random dat
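To make the disagreement concrete, here is a hedged sketch of the simplest SP 800-90B estimator, the most-common-value estimate, applied to raw byte samples (the 2.576 factor is the spec's 99% upper confidence bound; build with -lm). As the reply above points out, a deterministic CPRNG seeded with zeros would still score close to 8 bits/byte on a test like this, so a high score is not proof of real entropy. The toy input from rand() is purely to exercise the arithmetic:

```c
/* SP 800-90B most-common-value min-entropy estimate (per 8-bit sample). */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double mcv_min_entropy(const unsigned char *samples, size_t n)
{
    size_t counts[256] = {0}, max = 0;

    for (size_t i = 0; i < n; i++)
        counts[samples[i]]++;
    for (int v = 0; v < 256; v++)
        if (counts[v] > max)
            max = counts[v];

    double p = (double)max / (double)n;                        /* observed frequency of the mode */
    double p_upper = p + 2.576 * sqrt(p * (1.0 - p) / (double)(n - 1));
    if (p_upper > 1.0)
        p_upper = 1.0;
    return -log2(p_upper);                                     /* min-entropy bound in bits */
}

int main(void)
{
    size_t n = 1000000;                     /* the "1 million samples" from the parent comment */
    unsigned char *buf = malloc(n);
    if (!buf)
        return 1;
    for (size_t i = 0; i < n; i++)
        buf[i] = (unsigned char)rand();     /* NOT a secure source; illustration only */
    printf("estimated min-entropy: %.3f bits per byte\n", mcv_min_entropy(buf, n));
    free(buf);
    return 0;
}
```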

  • Adding entropy generation hardware on a motherboard or even in the CPU would be trivial.

    Would it be cryptographically sound? Probably not, but it's a hell of a lot better than the ad hoc system now in place.

    Is there some sort of patent issue preventing this from happening or something?

  • Servers process enormous amounts of data that could increase the entropy pool. Although they can be manipulated to some extent, the timing and content of network packets that reach them can hardly be predicted.

  • by JustAnotherOldGuy ( 4145623 ) on Sunday August 09, 2015 @12:57PM (#50280127) Journal
    Why don't they just use the RNG built into systemd??
  • by sinij ( 911942 ) on Sunday August 09, 2015 @03:48PM (#50280947)
    I do this for a living. This presentation is FUD and not applicable to 99% of all configurations. Sure, some headless systems with solid state drives will encounter 'at rest' issues if they idle long enough. This is why the /dev/random design blocks. For that 1% of cases you can always mix in Intel RdRand or Freescale SEC sources.

    The real issue with entropy is that developers keep using /dev/urandom; then all bets are off, as you need to guarantee that the system always has sufficient entropy.
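A hedged sketch of what "mixing in RdRand" could look like at the crudest level: pull a few RDRAND words and credit them to the kernel pool via the RNDADDENTROPY ioctl (x86 with RDRAND, build with -mrdrnd, needs CAP_SYS_ADMIN). Daemons such as rngd do this far more carefully, and the 512-bit credit below is deliberately naive:

```c
/* Feed RDRAND output into the kernel entropy pool and credit it. */
#include <fcntl.h>
#include <immintrin.h>
#include <linux/random.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    unsigned long long words[8];

    for (int i = 0; i < 8; i++)
        if (!_rdrand64_step(&words[i]))          /* retry loop omitted for brevity */
            return 1;

    struct rand_pool_info *info = malloc(sizeof *info + sizeof words);
    if (!info)
        return 1;
    info->entropy_count = 8 * 64;                /* bits credited; be less generous in practice */
    info->buf_size = sizeof words;
    memcpy(info->buf, words, sizeof words);

    int fd = open("/dev/random", O_WRONLY);
    if (fd < 0 || ioctl(fd, RNDADDENTROPY, info) < 0) {
        perror("RNDADDENTROPY");
        return 1;
    }
    close(fd);
    free(info);
    return 0;
}
```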
    • I do this for a living. This presentation is FUD and not applicable to 99% of all configurations. Sure, some headless systems with solid state drives will encounter 'at rest' issues if they idle long enough. This is why the /dev/random design blocks. For that 1% of cases you can always mix in Intel RdRand or Freescale SEC sources. The real issue with entropy is that developers keep using /dev/urandom; then all bets are off, as you need to guarantee that the system always has sufficient entropy.

      Your comment inadvertently highlights the problem.

      Developers don't use /dev/random because it blocks, and there's almost never any point to waiting. Once the pool has accumulated a few hundred bits of entropy, there's no reason not to use /dev/urandom.

      But blindly using /dev/urandom is dangerous because in some (fairly rare) cases the system will not have any entropy.

      The solution is to use /dev/urandom, but only after verifying that the pool has some entropy. Ideally, it would be nice to have an API th

      • by Sits ( 117492 )

        The solution is to use /dev/urandom, but only after verifying that the pool has some entropy. Ideally, it would be nice to have an API that allows you to find out how many total bits of entropy have been gathered by the system, regardless of how many remain in the pool at any given point in time. If the system has managed to accumulate a few hundred bits, just use /dev/urandom and get on with life. If it hasn't, use /dev/random and block.

        You could build what you are asking for by using the new (since the v3.17 kernel) getrandom() [lwn.net] syscall. See the part of its man page about emulating getentropy, which covers determining whether you've ever had up to 256 bits of entropy [man7.org], for one way of implementing your API suggestion...

        • Not really. getrandom() doesn't tell you how much entropy is in the pool, or if you've ever had a certain amount of entropy. It does allow you to wait until the pool has been "initialized", which is good, and maybe good enough, but it's not the same.
          • by Sits ( 117492 )

            Doesn't it essentially let you find out if you've had (since this boot?) up to 256 bits of entropy? You can ask it whether it has had an amount so long as it's less than 256 bits and you can force it to return failure if you ask for an amount it hasn't yet reached. It's not as generic as what you're asking ("tell me how much you've ever had") but it does still sound close to what you're asking for (albeit in a limited 256-bit form).

              • Doesn't it essentially let you find out if you've had (since this boot?) up to 256 bits of entropy?

              No, not in any way I can see.

              You can ask it whether it has had an amount so long as it's less than 256 bits and you can force it to return failure if you ask for an amount it hasn't yet reached.

              That's not what it says. It says:

              By default, getrandom() draws entropy from the /dev/urandom pool. [...] If the /dev/urandom pool has been initialized, reads of up to 256 bytes will always return as many bytes as requested and will not be interrupted by signals

              So, it's good that the call will block unless the pool has been initialized (unless you give it the GRND_NONBLOCK flag). But "pool has been initialized" doesn't tell you anything about how much entropy has been gathered except that it's non-zero. The bit about 256-byte (note: byte, not bit) reads always returning has nothing to do with entropy available; it's just an assurance that reads up to that size will be fast and atomic with respect to

              • by Sits ( 117492 )

                You're absolutely right - it only lets you tell whether the pool has been initialised, and GRND_NONBLOCK tells you how much /dev/random could return if it were blocking, but nothing about the total amount of entropy gathered.
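In other words, the most a program can portably learn this way is a yes/no answer. A small sketch of that probe, using GRND_NONBLOCK so the would-block case shows up as EAGAIN (the helper name is illustrative):

```c
/* Probe whether the urandom pool has ever been initialised, without blocking. */
#include <errno.h>
#include <stdio.h>
#include <sys/random.h>
#include <sys/types.h>

static int urandom_pool_initialized(void)
{
    unsigned char probe[1];
    ssize_t r = getrandom(probe, sizeof probe, GRND_NONBLOCK);

    if (r == (ssize_t)sizeof probe)
        return 1;                    /* initialised: reads will not block */
    if (r < 0 && errno == EAGAIN)
        return 0;                    /* not yet initialised */
    return -1;                       /* some other error (e.g. syscall unavailable) */
}

int main(void)
{
    int ok = urandom_pool_initialized();
    printf("urandom pool initialised: %s\n",
           ok == 1 ? "yes" : ok == 0 ? "not yet" : "unknown");
    return 0;
}
```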

  • It's not hard; all you need is a few thermal diodes to read from.

    I seriously can't believe modern hardware does not have hardware RNGs.

  • So what ever happened to "save the random state at shutdown, and restore the random state at startup"? That, I thought, was standard behavior as of about a decade ago.

    If that wasn't enough, then every so often (no more than once a day or so), grab 1K of random bytes from random.org, and add it to the pool at a slow and steady rate (when /dev/random would block, for example); refill that buffer when empty if enough time has passed.

    Yes, that basically means 1K bytes, or 8K bits, of entropy only for /dev/rando
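Saving and restoring a seed is indeed still standard practice (distribution init scripts, and nowadays systemd-random-seed, do roughly this). A rough sketch with an assumed path and size; note that writing the seed back stirs the pool but does not increase the kernel's entropy estimate, which would require the RNDADDENTROPY ioctl instead:

```c
/* Seed-file save/restore sketch: "save" at shutdown, "load" at early boot. */
#include <stdio.h>
#include <string.h>

#define SEED_FILE "/var/lib/random-seed"   /* illustrative path */
#define SEED_SIZE 512                      /* illustrative size in bytes */

static int copy_bytes(const char *from, const char *to)
{
    unsigned char buf[SEED_SIZE];
    FILE *in = fopen(from, "rb");
    FILE *out = fopen(to, "wb");
    int ok = in && out &&
             fread(buf, 1, sizeof buf, in) == sizeof buf &&
             fwrite(buf, 1, sizeof buf, out) == sizeof buf;

    if (in)
        fclose(in);
    if (out)
        fclose(out);
    return ok ? 0 : -1;
}

int main(int argc, char **argv)
{
    if (argc == 2 && strcmp(argv[1], "save") == 0)
        return copy_bytes("/dev/urandom", SEED_FILE) != 0;   /* run at shutdown */
    if (argc == 2 && strcmp(argv[1], "load") == 0)
        return copy_bytes(SEED_FILE, "/dev/urandom") != 0;   /* run at early boot */
    fprintf(stderr, "usage: %s save|load\n", argv[0]);
    return 2;
}
```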

  • Imagine a hypervisor spinning up 120 identical Linux instances - they all generate ssh keys at the same time during boot/install (± a few ms). How can that be "good"? It will probably survive statistical testing, but so would an LCG. I have serious doubts that there is enough entropy in ze clouds. Btw: the coolest way to harvest entropy is to feed your PRNG with SDR input.
