Linux RNG May Be Insecure After All

Okian Warrior writes "As a follow-up to Linus's opinion about people skeptical of the Linux random number generator, a new paper analyzes the robustness of /dev/urandom and /dev/random. From the paper: 'From a practical side, we also give a precise assessment of the security of the two Linux PRNGs, /dev/random and /dev/urandom. In particular, we show several attacks proving that these PRNGs are not robust according to our definition, and do not accumulate entropy properly. These attacks are due to the vulnerabilities of the entropy estimator and the internal mixing function of the Linux PRNGs. These attacks against the Linux PRNG show that it does not satisfy the "robustness" notion of security, but it remains unclear if these attacks lead to actual exploitable vulnerabilities in practice.'" Of course, you might not even be able to trust hardware RNGs. Rather than simply proving that the Linux PRNGs are not robust because of their run-time entropy estimator, the authors define a new property for proving the robustness of a PRNG's entropy accumulation stage, and offer an alternative PRNG model, with a proof, that is both robust and more efficient than the current Linux PRNGs.
  • Linus always says Linux is perfect. Linus can be wrong.

    • by raymorris ( 2726007 ) on Monday October 14, 2013 @11:26PM (#45128877) Journal

      Linus signs off on many changes every day. He does expect you to read the code before trying to change it. That was the problem before: someone put up a change.org petition that made it clear they had no idea how the code worked.

      • I just read TFA and associated paper, and the petition. Linus was right, the petition was pointless, and motivated by confusion on the petitioner's part. However, the paper points out some scary issues in the Linux PRNG. It's tin-foil hat stuff, but it shows how one user on a Linux system could write a malicious program that would drain the entropy pools, and then feed the entropy pool non-random data which Linux would estimate as very random. If this attack were done just before a user uses /dev/random

    • by WaywardGeek ( 1480513 ) on Tuesday October 15, 2013 @12:17AM (#45129061) Journal

      No, RNGs are easy. Super easy. Just take a trustworthy source of noise, such as Zener diode noise, and accumulate it with XOR operations. I built a 1/2 megabyte/second RNG that exposed a flaw in the Diehard RNG test suite. All it took was a 40 MHz 8-bit A/D conversion of amplified Zener noise XORed into an 8-bit circular shift register. The Diehard tests took 10 megabytes of data and reported whether they found a problem. My data passed several times, so I ran the tests thousands of times, and found one test that sometimes failed on my RNG data. It turned out Diehard had a bug in that test's code. Sometimes the problem is in the test, not the hardware.
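
      A minimal sketch of that accumulator idea, in Python. The os.urandom() call is only a stand-in for the amplified-Zener A/D samples (an assumption for illustration), and the 10-samples-per-output-byte fold matches the 80-input-bits-per-output figure the poster gives in a follow-up below:

          import os

          def rng_stream(n_bytes, samples_per_byte=10):
              # Fold 8-bit A/D samples into an 8-bit accumulator: rotate the
              # accumulator left one bit per sample, then XOR the sample in.
              # 10 samples per output byte = 80 input bits per 8 output bits.
              out = bytearray()
              acc = 0
              for _ in range(n_bytes):
                  for _ in range(samples_per_byte):
                      sample = os.urandom(1)[0]   # stand-in for one A/D conversion
                      acc = ((acc << 1) | (acc >> 7)) & 0xFF
                      acc ^= sample
                  out.append(acc)
              return bytes(out)

          print(rng_stream(16).hex())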

      • by Anonymous Coward on Tuesday October 15, 2013 @12:49AM (#45129219)

        No, RNGs are easy. Super easy. Just take a trustworthy source of noise

        Therein lies the tricky part. Getting a trustworthy source of noise is harder than you may think. Especially when you're writing software with no control over the hardware it runs on.

        • by Vintermann ( 400722 ) on Tuesday October 15, 2013 @02:58AM (#45129635) Homepage

          The nice thing about randomness, though, is that it adds up. If you XOR one stream of hopefully random bits with another, independent stream of hopefully random bits, you get a result that is at least as random as the better of the two streams, quite possibly better than either. (The independence matters: XORing correlated streams can hurt.) It's a rare and precious thing in cryptography: something you can't make worse by messing up. At worst you make no difference.

          So if you're paranoid, personally come up with a ridiculously long phrase (you don't need to remember it), feed it through a key derivation function, and use the result as the key for a stream cipher with proven security guarantees (in particular, one that passes the next-bit test [wikipedia.org] for polynomial time). Instead of using this keystream directly, XOR it together with a source of hopefully random stuff.

          If you write to /dev/random this is more or less what happens. Write to it to your heart's content - it can only make it better, not worse. (This is as I recall, please check with an independent source before you try).

          Voila, no matter what the NSA has done to your HRNG chip, this door is secured. Your time is better spent focusing on the other doors, or the windows.

          (But you should be very careful about using HRNG output directly. I am very surprised to read that some open source OSes disable the stream cipher if an HRNG is present - this is a very bad idea!)
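
          A minimal sketch of that combiner (Python). PBKDF2 and a SHA-256 counter keystream stand in for the key derivation function and the proven stream cipher; the salt, iteration count, and os.urandom()-as-HRNG are assumptions for illustration only:

              import hashlib, os

              def keystream(key, n):
                  # SHA-256 in counter mode: a toy stand-in for a real stream cipher.
                  out, ctr = bytearray(), 0
                  while len(out) < n:
                      out += hashlib.sha256(key + ctr.to_bytes(8, 'big')).digest()
                      ctr += 1
                  return bytes(out[:n])

              phrase = b"ridiculously long phrase that nobody needs to remember ..."
              key = hashlib.pbkdf2_hmac('sha256', phrase, b'fixed-salt', 100_000)

              hw = os.urandom(32)   # stand-in for hardware RNG output
              mixed = bytes(a ^ b for a, b in zip(keystream(key, 32), hw))
              # 'mixed' is at least as unpredictable as the stronger input,
              # provided the two streams are independent.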

          • by pspahn ( 1175617 ) on Tuesday October 15, 2013 @03:29AM (#45129739)

            Pick two random dates between 1950 (or earlier... arbitrary cut-off) and today. Then go to each date and find all the sports scores from that day. Do some random math on those scores independently. Then take those two results and do some more random math between the two.

            Add in more Nth days as you please.

            That enough entropy for you? I guess it might not be. I suppose you could also factor in which days of the week the Cleveland Browns are likely to win on, since that is definitely random.

      • by Anonymous Coward on Tuesday October 15, 2013 @01:12AM (#45129301)

        Good for you. That is still not a viable solution for generating cryptographic keys, IVs, salts, and so on. Two drawbacks with your idea:

        1. Too slow. You need far more random data than a zener diode can generate. You could combine many of them, but then you need to combine them in the right way.

        2. Unreliable. Zener diodes are easy to affect with temperature, and you need to make sure that hardware flaws don't make them produce 1 more often than 0 (or the other way around).

        This is why we need software RNGs. We take a good hardware-based seed from multiple sources (combined using SHA256 or something), and then use that to seed the CSPRNG (not just a PRNG). The CSPRNG then generates a very long stream of secure random data which can then be used.

        I'm not too pleased about the design of Fortuna, but it seems like one of the better choices for how to combine HW input and generate an output stream.
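
        A minimal sketch of the seed-then-expand idea (Python). The "sources" here are hypothetical placeholders, and the hash-counter expansion is a toy stand-in for a real CSPRNG such as Fortuna or CTR_DRBG:

            import hashlib, os, time

            # Hash several raw source readings together into one seed.
            sources = [
                os.urandom(32),                              # e.g. a zener-noise sampler
                time.perf_counter_ns().to_bytes(8, 'big'),   # timing jitter
                os.getpid().to_bytes(4, 'big'),              # weak alone, harmless hashed in
            ]
            seed = hashlib.sha256(b''.join(sources)).digest()

            def csprng_stream(seed, n):
                # Hash-counter expansion of the seed into a long output stream.
                out, ctr = bytearray(), 0
                while len(out) < n:
                    out += hashlib.sha256(seed + ctr.to_bytes(8, 'big')).digest()
                    ctr += 1
                return bytes(out[:n])

            print(csprng_stream(seed, 64).hex())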

        • 2. Unreliable. Zener diodes are easy to affect with temperature, and you need to make sure that hardware flaws don't make them produce 1 more often than 0 (or the other way around).

          A good implementation assumes that there is an unknown bias and produces an unbiased (to within 50% +/- whatever margin you want) output anyway.
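
          The classic way to do that is the von Neumann extractor: read bits in pairs, emit 0 for 01 and 1 for 10, and discard 00 and 11. It assumes the input bits are independent, just biased. A quick sketch (Python, with a simulated 70/30 source):

              import random

              def von_neumann(bits):
                  # 01 -> 0, 10 -> 1, 00 and 11 -> discard.
                  return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

              biased = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
              unbiased = von_neumann(biased)
              print(sum(unbiased) / len(unbiased))   # ~0.5 despite the 70/30 bias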

        • The temperature coefficient of a randomly measured Zener diode or other noise source is likely to stay pretty darn close, cycle to cycle, as it takes a fair amount of temperature delta to produce much change in the randomization. It is therefore NOT so easy to change unless you apply a lot of temperature to the same diode. Change the family, or even the diode within a family (although less so within a batch), and you can get some delta.

          I agree with all suggestions that multiple hashing techniques can be used to prevent filling the entropy pools, and multiple

        • by Agripa ( 139780 )

          There are all kinds of ways to design the hardware to prevent problems besides the obvious of measuring various environmental variables like temperature.

          Feedback from the output of each slicer can adjust the comparison threshold to produce an output with an equal number of high and low states over an arbitrary period of time. This technique is commonly used for closed loop control of duty cycle in applications requiring precision waveform generation. The slicer threshold can be monitored as another health
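
          A sketch of that feedback loop (Python, with simulated offset noise; the step size and DC offset are made-up values for illustration):

              import random

              def slicer_bits(n, step=0.001):
                  # Compare noise samples against an adaptive threshold; each output
                  # bit nudges the threshold so highs and lows balance over time.
                  threshold, bits = 0.0, []
                  for _ in range(n):
                      sample = random.gauss(0.3, 1.0)   # noise with an unknown DC offset
                      bit = 1 if sample > threshold else 0
                      bits.append(bit)
                      threshold += step if bit else -step   # closed-loop duty-cycle control
                  return bits

              bits = slicer_bits(200_000)
              print(sum(bits) / len(bits))   # converges toward 0.5 despite the offset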

      • by AmiMoJo ( 196126 ) *

        It seems unlikely that you would get good results from such a setup without whitening the data, but you don't mention doing that. Did you try it with anything other than the rather old Diehard tests? Do you know they are not an indication of cryptographically secure randomness? Do you have a schematic that could be reviewed for errors?

        • I built this in the mid 90's, and at the time the old DOS-based Diehard tests were the best I could find. Certainly the stream should be whitened before being used in cryptography, but the data directly from the board passed the Diehard tests without whitening. However, the board XORed 80 bits from the A/D to produce a single output bit (or more - this was software controllable). Since the 8-bit randomness accumulator was rotated 1 bit every sample, each bit was XORed 10 times with each of the A/D outputs. I

  • Dupe! (Score:5, Funny)

    by VanessaE ( 970834 ) on Monday October 14, 2013 @10:12PM (#45128423)

    .....wait! it's not what you think.

    "a new paper analyzes the robustness of /dev/urandom and /dev/urandom."

    So now we're putting the dupes together into the same summary? Jeez, can't we at least wait a few hours first?

    • Re: (Score:3, Funny)

      by Anonymous Coward

      Coincidence. They were chosen at random.

  • by fishnuts ( 414425 ) <fishnuts@arpa.org> on Monday October 14, 2013 @10:12PM (#45128427) Homepage

    At what scope/scale of time or range of values does it really matter if a PRNG is robust?
    If a PRNG seeded by a computer's interrupt count, process activity, and sampled I/O traffic (such as audio input, fan speed sensors, and keyboard/mouse input - which I believe is a common seeding method) is deemed sufficiently robust when polled only once a second, or for only 8 bits of resolution, exactly how much less robust does it get if you poll it, say, a million times per second, or in a tight loop? Does it get more or less robust when the PRNG is asked for a larger or smaller bit field?

    Unless I'm mistaken, the point is moot when the only cost of having a sufficiently robust PRNG is to wait for more entropy to be provided by its seeds or to use a larger modulus for its output, both rather trivial in the practical world of computing.

    • by jythie ( 914043 ) on Monday October 14, 2013 @10:57PM (#45128713)
      The headline is somewhat sensational. There is a pretty wide gulf between an abstract and rather arbitrary metric and a practical vulnerability. This is kinda the security equivalent of pixel peeping: a fun mathematical exercise at best and a pissing contest at worst, but ultimately not all that important.
      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Right up until someone figures out how to put those "ultimately not all that important" technical weaknesses to good use. And then suddenly every single last deployed system might turn out to be vulnerable. Which might already be the case, but has been kept carefully secret for a rainy day.

        Of course, usually it's a lot of "theoretical" shadow dancing, and given the nature of the field some things will indubitably remain unclear forever (unless another Snowden stands up, who just happens to give us a clear-e

      • The headline is somewhat sensational. There is a pretty wide gulf between an abstract and rather arbitrary metric and a practical vulnerability. This is kinda the security equivalent of pixel peeping: a fun mathematical exercise at best and a pissing contest at worst, but ultimately not all that important.

        I am definitely not a statistician; but there may be applications other than crypto and security-related randomization that break (or, rather worse, provide beautifully plausible wrong results) in the presence of a flawed RNG.

        • by Rockoon ( 1252108 ) on Tuesday October 15, 2013 @07:23AM (#45130581)
          It's trivial to show that typical PRNGs are frequently not good enough for the Monte Carlo simulations and testing that they are thrown at. An N-bit PRNG can only produce at most half of all N+1 bit sequences, a quarter of all N+2 bit sequences, an eighth of all N+3 bit sequences, and so on....

          Given this, obviously the larger N is, the better. Of course most standard libraries use a 32-bit (or smaller) generator, and most programmers are lazy or uneducated in the matter... so only half of all 8-billion-to-one shots are even possible with that 32-bit generator, and each of those can only be sampled if the generator happens to be in the perfect 4-billion-to-one state....

          As the saying goes...Random numbers are too important to be left to chance.
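
          A tiny demonstration of that counting argument (Python; the 8-bit LCG parameters are arbitrary, chosen only to keep the state space small enough to enumerate):

              # An 8-bit-state generator has at most 2**8 = 256 starting states, so
              # it can emit at most 256 of the 2**9 = 512 possible 9-bit sequences.
              def prng_bits(state, n):
                  out = []
                  for _ in range(n):
                      state = (5 * state + 3) & 0xFF   # toy 8-bit LCG
                      out.append((state >> 7) & 1)     # emit the state's top bit
                  return tuple(out)

              seqs = {prng_bits(s, 9) for s in range(256)}
              print(len(seqs), "of 512 possible 9-bit sequences")   # never more than 256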
      • by steelfood ( 895457 ) on Tuesday October 15, 2013 @12:45AM (#45129193)

        Useless for you. But the NSA might disagree. The math is what keeps them at bay. If the math shows cracks, it's all but certain that the NSA has figured out some kind of exploit. Keep in mind that the NSA doesn't rely on just one technique, but can aggregate multiple data sources. So those interrupts that the RNG relies on can be tracked, and the number that results can be narrowed to a searchable space. Keep in mind that 2^32, which is big by any human standard, is minuscule for a GPU.

        • by Xest ( 935314 ) on Tuesday October 15, 2013 @04:44AM (#45130047)

          Exactly. If the recent leaks have taught us anything, it's that the NSA has managed to produce real working exploits where previously such issues were just dismissed as nothing to worry about because they were "only theory".

          At this point it's stupid to assume that just because you can't come up with a working exploit, someone with the resources the NSA has hasn't already.

          It's of course not even just the NSA people should worry about; it seems naive to think the Russians, Chinese, et al. haven't put similar resources into this sort of thing - the difference is they just haven't had their Snowden incidents yet. I'd imagine the Chinese and Russians have exploits for things the NSA hasn't managed to break, just as the NSA has exploits for things the Chinese and Russians haven't managed to break. Then there's the Israelis, the French, the British and many others.

          It's meaningless to separate theory and practice at this point. If there's a theoretical exploit then it should be fixed, because whilst it may just be theoretical to one person, it may not to a group of others.

      • by jhol13 ( 1087781 ) on Tuesday October 15, 2013 @01:44AM (#45129419)

        Your attitude is exactly what is wrong with security. Quite a few people still use MD5 because "it is not that broken". Linus really should take a look at this new provably better method and adopt it ASAP, not wait until it bites hard.

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          Lots of people still use MD5, because it's widely available, fast, and good enough for what they need. Just like CRC32 is good enough for certain tasks. MD5 may be broken for cryptographic purposes, but it's still fine to use for things like key-value stores.

      • by fatphil ( 181876 )
        The abstract seems to be talking about recovery from state compromise.

        If you've got state compromise, you've already run code that you didn't want to run, and therefore you are not in control of your machine any more.

        Game over. They're logging your keystrokes too, probably.
    • by Anonymous Coward on Monday October 14, 2013 @11:41PM (#45128955)

      First of all, not all computers are PCs. A server running in a VM has no audio input, fan speed, keyboard, mouse, or other similar devices that are good sources of entropy. A household router appliance running Linux not only has no audio input, fan, keyboard, or mouse -- it doesn't even have a clock it can use as a last resort source of entropy.

      Second, there are many services that require entropy during system startup. At that point, there are very few interrupts, no mouse or keyboard input yet, and some of the sources of entropy may not even be initialized yet.

      One problematic situation is initializing a household router. On startup it needs to generate random keys for its encryption, TCP sequence numbers, and so on. Without a clock, a disk, a fan, or any peripherals, the only good source of entropy it has is network traffic, and there hasn't been any yet. A router with very little traffic on its network may take ages to get enough packets to make a decent amount of entropy.


      • by kscguru ( 551278 ) on Tuesday October 15, 2013 @02:03AM (#45129471)

        VMs do have good sources of entropy ... while a server indeed has no audio / fan / keyboard / mouse inputs (whether physical or virtual), a server most certainly does have a clock (several clocks: on x86, TSC + APIC + HPET). You can't use skew (as low-res clocks are implemented in terms of high-res clocks), but you can still use the absolute value on interrupts (and servers have a lot of NIC interrupts) for a few bits of entropy. Time is a pretty good entropy source, even in VMs: non-jittery time is just too expensive to emulate; the only people who would try are the blue-pill hide-that-you-are-in-a-VM security researchers.

        The real security concern with VMs is duplication ... if you clone a bunch of VMs but they start with the same entropy pool, then generate an SSL cert after clone, the other SSL certs will be easily predicted. (Feeling good about your EC2 instances, eh?) This isn't a new concern - cloning via Ghost has the same problem - but it's easier to get yourself into trouble with virtualization.
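
        A toy demonstration of the clone problem (Python; the hash-counter generator and the "unique per instance" strings are illustrative stand-ins, not how any particular hypervisor or kernel behaves):

            import hashlib

            def drbg(seed, n):
                # Toy deterministic generator: both clones effectively run this.
                out, ctr = bytearray(), 0
                while len(out) < n:
                    out += hashlib.sha256(seed + ctr.to_bytes(8, 'big')).digest()
                    ctr += 1
                return bytes(out[:n])

            pool = b'entropy pool captured in the snapshot'   # identical in every clone
            print(drbg(pool, 16) == drbg(pool, 16))           # True: identical "random" keys

            # The fix: mix something unique per instance into the pool after cloning.
            a = drbg(hashlib.sha256(pool + b'clone-A MAC + boot time').digest(), 16)
            b = drbg(hashlib.sha256(pool + b'clone-B MAC + boot time').digest(), 16)
            print(a == b)                                     # False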

        • A VM is by definition running on a host that can skew what the VM sees, and so is intrinsically insecure ...

        • The real security concern with VMs is duplication ... if you clone a bunch of VMs but they start with the same entropy pool, then generate an SSL cert after clone, the other SSL certs will be easily predicted.

          Yeah, I encountered that the other day. Built a VM, took a snapshot, did some stuff, reverted, did the same stuff. I was testing a procedure doc I was writing. Part of the procedure was creating an SSL cert, and I got an identical one on both attempts. That seems a little fishy to me; I would expec

        • I think it is past time for CPUs to provide hardware random numbers. Via CPUs have done this [slashdot.org] for years, but Via CPUs are just too slow for most uses. (I used to run my mail server on a Via C3... I am a lot happier now that my server runs on an AMD low-power dual-core.)

          Recent Intel chips do have some sort of random number generator (RdRand [wikipedia.org]).

          Hardware RNG accessories are available but expensive [wikipedia.org].

          There is the LavaRnd [lavarnd.org] project, which I think is really darn cool. However, I downloaded the source code, and it hasn

          • by steveha ( 103154 )

            Entropy Broker: A project to allow one computer to act as a randomness server to others. Could use any actual hardware machine to seed any number of VMs.

            See also the comments in the LWN article, where someone with the user name "starlight" simply sends random data over SSH and then the receiving computer uses rngd to mix that data into the entropy pool. Simple, and simple is good.

            https://lwn.net/Articles/546428/ [lwn.net]

            http://www.vanheusden.com/entropybroker/ [vanheusden.com]

  • Yawn (Score:5, Interesting)

    by Anonymous Coward on Monday October 14, 2013 @10:16PM (#45128453)

    The output of a software RNG, aka PRNG (pseudo random number generator), is completely determined by a seed. In other words, to a computer (or an attacker), what looks like a random sequence of numbers is no more random than, let's say,

    (2002, 2004, 2006, 2008, 2010, 2012...)

    However, the PRNG sequence is often sufficiently hashed up for many applications such as Monte Carlo simulations.
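
    A quick illustration of that determinism, using Python's standard-library Mersenne Twister (the seed value is arbitrary):

        import random

        random.seed(2013)
        first = [random.randint(0, 9) for _ in range(8)]
        random.seed(2013)
        second = [random.randint(0, 9) for _ in range(8)]
        print(first == second)   # True: the whole "random" sequence follows from the seed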

    When it comes to secure applications such as cryptography and Internet gambling, things are different. Now a single PRNG sequence is pathetically vulnerable, and one needs to combine multiple PRNG sequences, using seeds that are somehow independently produced, to provide a combined stream that hopefully has acceptable security. But using a COTS PC or phone doesn't allow developers to create an arbitrary stream of independent RNG seeds, so various latency tricks are used. In general, these tricks can be defeated by sufficient research, so often a secure service relies partly on "security through obscurity", i.e. not revealing the precise techniques for generating the seeds.

    This is hardly news. For real security you need specialized hardware devices.

    • Re:Yawn (Score:4, Funny)

      by mrchaotica ( 681592 ) * on Monday October 14, 2013 @10:26PM (#45128515)

      For real security you need specialized hardware devices.

      Yep. I think it's about time to hook up the ol' lava lamp [lavarnd.org].

    • Re:Yawn (Score:4, Informative)

      by Anonymous Coward on Tuesday October 15, 2013 @12:00AM (#45129031)

      "For real security you need specialized hardware devices."

      Indeed. And it's worth considering that the Raspberry Pi has a hardware RNG built in. Also, the tools to make it functional are in the latest updates to Raspbian...

      Did I mention that an RPi B is actually cheaper than the least expensive HWRNG-device (the Simtec Entropy Key - which is waaaayyyy backordered at the moment) - and about three times faster?

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        So do Intel CPUs.

        The previous discussion was about whether or not to remove the Intel hardware RNG from the equation, because Intel is an American company, and as such subject to NSA requirements. I.e, nobody knows if the Intel hardware RNG is a true random number generator, or a pseudo random number generator with a secret key that only the NSA knows.

        (Some would claim that such a suggestion is tinfoil hat material, but lately, Edward Snowden has been making the tinfoil had crowd say "damn, it's worse tha

      • by gl4ss ( 559668 )

        you can check how a sw random generator works.

        you can not, generally, check how a hw random generator built into a chip works.

        and heck... if the one in the rasp is good enough for you then probably the intel equivalent should suffice. however what good are real random numbers when you want a repeatable string of random numbers for which the seed is hard to guess from the numbers...

  • by Anonymous Coward on Monday October 14, 2013 @10:24PM (#45128509)

    So, with all the 'revelations' and discussion surrounding this and encryption over the past several weeks, I've been wondering if a local hardware-based entropy solution could be developed. By 'solution', I mean an actual piece of hardware that takes various static noise from your immediate area, ranging from 0-40+ kHz (or into MHz or greater?), both internal and external to the case, and with that noise builds a pool for what we use as /dev/random and /dev/urandom. Perhaps each user would decide what frequencies to use, with varying degrees of percentage to the pool, etc., etc.

    It just seems that with so much 'noise' going on around us in our everyday environments, we have an opportunity to use some of it as an entropy source. Is anyone doing this? Because it seems like a fairly obvious implementation.

    • I linked to LavaRnd in a reply to an earlier post, but at the risk of being redundant, I'll mention it again [lavarnd.org].

    • Unless this abstract can be translated into an actual attack of some sort, the chances are that changes would mostly create new holes, not really improve security.

    • Greatest implementation of an RNG I have seen. Not really realistic to have in every server room. http://gamesbyemail.com/News/DiceOMatic [gamesbyemail.com]
    • by WaywardGeek ( 1480513 ) on Tuesday October 15, 2013 @12:30AM (#45129113) Journal

      Yes. Hardware high-speed super-random number generators are trivial. I did it with amplified Zener noise through an 8-bit 40 MHz A/D XORed onto an 8-bit ring-shift register, generating 0.5 MB/second of random noise that the Diehard tests could not differentiate from truly random numbers. XOR that with an ARC4 stream, just in case there's some slight non-randomness, and you're good to go. This is not rocket science.
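
      A sketch of that XOR combiner (Python). The ARC4 keystream below is included only because the poster proposes it - RC4 output has known biases and is deprecated for new designs - and os.urandom() again stands in for the Zener/A-D stream:

          import os

          def arc4_keystream(key, n):
              # Plain RC4/ARC4 keystream.
              S = list(range(256))
              j = 0
              for i in range(256):                     # key scheduling
                  j = (j + S[i] + key[i % len(key)]) % 256
                  S[i], S[j] = S[j], S[i]
              out, i, j = bytearray(), 0, 0
              for _ in range(n):                       # keystream generation
                  i = (i + 1) % 256
                  j = (j + S[i]) % 256
                  S[i], S[j] = S[j], S[i]
                  out.append(S[(S[i] + S[j]) % 256])
              return bytes(out)

          hw = os.urandom(32)                          # stand-in for the hardware stream
          mixed = bytes(a ^ b for a, b in zip(hw, arc4_keystream(b'seed material', 32)))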

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        RC4 is cracked, and a 120MHz radio signal can bias your generator.

        (That wiped the smile off your face, didn't it!)

        Two unstable oscillators, at different frequencies, work well, with an appropriate whitener and shielding. You need to detect when measurable entropy in the raw stream falls below a certain level, and cease output. Use that to seed a pure CSPRNG - the Intel generator uses this construct with AES-based CTR_DRBG (which seems to be OK, if you trust AES, unlike the harebrained and obviously backdoored Dual_EC_DRBG). You als

        • No, a 120MHz radio signal cannot bias my generator unless the signal is so strong that the noise signal goes outside the ADC input range. Simply adding non-random bias to a random signal does not reduce the randomness in the signal. Just XOR all the ADC output bits together for 80 cycles, where the low 4 bits correlate sample-to-sample less than 1%, and you get an error of less than 1 part in 10^50 of non-randomness. Add any signal you like to the noise, and it won't make any difference, so long as
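
          The arithmetic behind that claim is the piling-up lemma (the 1% per-sample bias below is a hypothetical figure, not a measurement from this board):

              # Piling-up lemma: XORing n independent bits, each with bias eps
              # (P(bit = 0) = 1/2 + eps), leaves a combined bias of 2**(n-1) * eps**n.
              def combined_bias(eps, n):
                  return 2 ** (n - 1) * eps ** n

              print(combined_bias(0.01, 80))   # ~6e-137, far below 1 part in 10^50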

      • by AmiMoJo ( 196126 ) *

        Dual_EC_DRBG did well against Diehard but is known to be backdoored by the NSA. The Linux PRNG does well against Diehard too. In other words, Diehard is known to be a poor test for a cryptographically secure PRNG.

        Unfortunately there is no simple suite of tests you can run to make this determination. Zener noise is a good source of entropy, but the chances that your A/D is unbiased, or that XORing with an ARC4 stream is enough to remove the bias completely, are slim. At best you created another useful source of

  • These guys had the time, brains, and resources available to do a full breakdown of how /dev/random might not be so random... why not go all the way, submit a patch, and fix it?

    • It might have been a scholarly F-U to Linus; if so, why would they fix it?
    • by Z00L00K ( 682162 ) on Tuesday October 15, 2013 @02:22AM (#45129537) Homepage Journal

      "Not so random" means that you can mathematically calculate how likely it is that you can predict the next number over a long time. If you can predict the next number with an accuracy of 1 in 250 while the random generator provides 1 in 1000 then the random generator isn't that random.

      Many random generators pick the previous value as the seed for the next value, but that is definitely predictable. Introduce some additional factors into the equation and you lower the predictability. One real problem with random generators that use the previous value as a seed without adding a second factor is that they can't generate the same number twice or three times in a row (which actually is allowed under the rules of randomness).

      It's a completely different thing to create a true random number. For a 32-bit number you essentially should have one generator source for each bit that doesn't care how the bit was set previously. It is a bit tricky to create that in a computer in a way that also allows fast access to a large number of random numbers and prevents others from reading the same random number as you do.

      For computers it's more a question of "good enough" to make prediction an unfeasible attack vector.

    • by Xest ( 935314 )

      Because whilst they may have had the time and resources to research it and come up with a solution, they don't necessarily have the time and resources to fight their patch past Commander Torvalds' ego and army of protective zealots.

      Many bright academics simply have better things to do than get involved in that sort of childishness. Research is what they enjoy and they have neither the time nor interest in political bickering to get involved, especially if it flies in the face of logic - i.e. someone refusin

      • Because whilst they may have had the time and resources to research it and come up with a solution, they don't necessarily have the time and resources to fight their patch past Commander Torvalds' ego and army of protective zealots.

        Or maybe, now that they have actually created a paper that may or may not define a real issue with the kernel in depth (I'll wait and see how it is peer reviewed, as I have no idea about theoretical weaknesses created by different ways of combining sources of entropy in order to create random data) - not just some moronic bullshit petition based on a completely incorrect assessment of how someone guessed something would work, with no real investigation of the actual system involved - the kernel team will actually l

        • by Xest ( 935314 )

          You're just focusing on one incident, though, and there have been hundreds over the years. Often Linus is right and is just dealing with an idiot, but sometimes he's not.

          But I'm simply reasoning a response to the GP's post as to why people may wish to steer clear of contributing a fix once they've written a paper. If a clique is unnecessarily cruel to anyone, no matter how stupid that person, then that's going to put people off interacting with it directly. As far back as the original Tanenbaum debate he's

  • by OhANameWhatName ( 2688401 ) on Monday October 14, 2013 @10:30PM (#45128549)

    From a practical side, we also give a precise assessment of the security of the two Linux PRNGs, /dev/random and /dev/urandom

    The internet must have finally run out of porn.

  • by Anonymous Coward on Monday October 14, 2013 @10:38PM (#45128605)

    I swear, if I worked for the NSA I'd be pushing out headlines like this to make people ignore real security issues...

    The article is a highly academic piece that analyzes the security of the Linux RNG against a bizarre and probably pointless criterion: what is an attacker's ability to predict the future output of the RNG, assuming he knows the entire state of your memory at arbitrary attacker-selected points in time and can add inputs to the RNG? Their analysis that the Linux RNG is insecure under this (rather contrived) model rests on an _incorrect_ assumption that Linux stops adding to the entropy pool when the estimator concludes that the entropy pool is full. Instead they offer the laughable suggestion of using AES in counter mode as a "provably secure" alternative.

    (presumably they couldn't get a paper published that said "don't stop adding entropy just because you think the pool is at maximum entropy", either because it was too obviously good a solution or because their reviewers might have noticed that Linux already did that)

    • Also, any measurement that they use to know the state will change the state
    • by FrangoAssado ( 561740 ) on Monday October 14, 2013 @11:24PM (#45128861)

      Their analysis that the Linux RNG is insecure under this (rather contrived) model rests on an _incorrect_ assumption that Linux stops adding to the entropy pool when the estimator concludes that the entropy pool is full.

      Exactly. The maintainer of the /dev/random driver explained this and a lot more about this paper here [ycombinator.com].

    • Yeah, because random number generators are like pr0n for the masses, and will distract them from boring stuff like the government monitoring and storing that really stupid thing they said about their boss, right next to their cybersex session with their mistress.

  • by mark_reh ( 2015546 ) on Monday October 14, 2013 @10:41PM (#45128629) Journal

    Zener diodes biased into avalanche mode to generate random noise? I don't think even the NSA has figured out how to hack the laws of thermodynamics.

    • Somebody's gotta implement it in hardware. Do you trust Intel or AMD? I don't. If I can run an OSS analyzer on it and the results come out positive, I might be convinced. But I'm not sure this feature even exists for any consumer chips.

    • by gman003 ( 1693318 ) on Tuesday October 15, 2013 @12:52AM (#45129237)

      Well, Intel and VIA have such things integrated into their processors now. Unfortunately, they (at least Intel - not sure how VIA's implementation worked) decided to whiten the data in firmware - you run a certain instruction that gives you a "random" number, instead of just polling the diode. With all the current furor over the NSA stuff, many people are claiming that it *is* hacked.

      • Without whitening you're likely to get biased output, which is undesirable in most of the contexts where you'd want a truly random number in the first place. You can do whitening in software, which is fine for things like seeding a PRNG where you've already got a lot of code related to managing incoming entropy, but it makes the instruction difficult to use in other contexts where you just want a random number for direct consumption. On the other hand, pre-whitened data is just as useful for PRNG seeds *and

      • by AmiMoJo ( 196126 ) *

        Even if they didn't whiten the data you could never trust that they didn't bias the measurement somehow. Besides which TFA isn't questioning the sources of entropy, of which there are many in a typical Linux system, it is the way that they are combined.

  • by Anonymous Coward

    He "confirmed" he'd been asked to backdoor linux, he never confirmed whether or not he agreed... :)

  • by gweihir ( 88907 ) on Monday October 14, 2013 @11:59PM (#45129021)

    If the CSPRNG state is not compromised, the Linux random generators are secure. In fact the property required for robustness is quite strong: recovery from compromise even if entropy is only added slowly. For example, the OpenSSL generator also does not fulfill the property. Fortuna does, as it is explicitly designed for this scenario.

    I also have to say that the paper is not well written; the authors seem to believe that the more complicated the formalism, the better. This may also explain why there is no analysis of the practical impact: the authors seem not to understand what "practical" means and why it is important.

  • by Mister Liberty ( 769145 ) on Tuesday October 15, 2013 @12:45AM (#45129191)

    Bruce Schneier has some thoughts on the study and the subject:
    https://www.schneier.com/blog/archives/2013/10/insecurities_in.html [schneier.com]

  • I think the only question on my mind is: what exactly is it deemed insecure for? Generating public/private key pairs? Doing encryption for SSL/TLS?

    I've been around computers for a good number of years and I know no computer can be truly random, but isn't there a point where we say "it's random enough"? Is the OP saying Linux's RNG isn't "random enough"? And my question is: what isn't it random enough for?

  • Panic! (Score:2, Funny)

    by Anonymous Coward

    I'm having a security panic over here!

    as a quick fix I deleted /dev/random and did ln -s /dev/zero /dev/random

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...