Linux RNG May Be Insecure After All
Okian Warrior writes "As a follow-up to Linus's opinion about people skeptical of the Linux random number generator, a new paper analyzes the robustness of /dev/urandom and /dev/random. From the paper: 'From a practical side, we also give a precise assessment of the security of the two Linux PRNGs, /dev/random and /dev/urandom. In particular, we show several attacks proving that these PRNGs are not robust according to our definition, and do not accumulate entropy properly. These attacks are due to the vulnerabilities of the entropy estimator and the internal mixing function of the Linux PRNGs. These attacks against the Linux PRNG show that it does not satisfy the "robustness" notion of security, but it remains unclear if these attacks lead to actual exploitable vulnerabilities in practice.'"
Of course, you might not even be able to trust hardware RNGs. Rather than simply proving that the Linux PRNGs are not robust due to their run-time entropy estimator, the authors provide a new property for proving the robustness of a PRNG's entropy accumulation stage, and offer an alternative PRNG construction, with proof, that is both robust and more efficient than the current Linux PRNGs.
Random number generators are hard (Score:2, Insightful)
Linus always says Linux is perfect. Linus can be wrong.
many times a day he says Linux needs changes (Score:5, Informative)
Linus signs off on many changes every day. He does expect you to read the code before trying to change it. That was the problem before: someone put up a change.org petition that made clear they had no idea how it worked.
Re: (Score:3)
I just read TFA and the associated paper, and the petition. Linus was right: the petition was pointless, and motivated by confusion on the petitioner's part. However, the paper points out some scary issues in the Linux PRNG. It's tin-foil-hat stuff, but it shows how one user on a Linux system could write a malicious program that would drain the entropy pools and then feed the entropy pool non-random data which Linux would estimate as very random. If this attack were done just before a user uses /dev/random...
Re:Random number generators are hard (Score:5, Interesting)
No, RNGs are easy. Super easy. Just take a trustworthy source of noise, such as Zener diode noise, and accumulate it with XOR operations. I built a 1/2 megabyte/second RNG that exposed a flaw in the Diehard RNG test suite. All it took was a 40 MHz 8-bit A/D conversion of amplified Zener noise XORed into an 8-bit circular shift register. The Diehard suite took 10 megabytes of data and reported whether it found a problem. My data passed several times, so I ran it thousands of times, and found that one test sometimes failed on my RNG data. It turned out the Diehard code for that test had a bug. Sometimes the problem turns out to be in the test, not the hardware.
Re: Random number generators are hard (Score:5, Insightful)
No, RNGs are easy. Super easy. Just take a trustworthy source of noise
Therein lies the tricky part. Getting a trustworthy source of noise is harder than you may think. Especially when you're writing software with no control over the hardware it runs on.
Re: Random number generators are hard (Score:5, Insightful)
The nice thing about randomness, though, is that it adds up. If you xor one stream of hopefully random bits with another, independent stream of hopefully random bits, you get a result that is at least as random as the better of the two streams, quite possibly better than either. It's a rare and precious thing in cryptography: something you can't make worse by messing up. At worst you make no difference.
So if you're paranoid, personally come up with a ridiculously long phrase (you don't need to remember it), feed it through a key derivation function, and use it to key a stream cipher with proven security guarantees (in particular, one that passes the next-bit test [wikipedia.org] against polynomial-time predictors). Instead of using this directly, xor it together with a source of hopefully random stuff.
If you write to /dev/random this is more or less what happens. Write to it to your heart's content - it can only make it better, not worse. (This is as I recall, please check with an independent source before you try).
Voila, no matter what NSA has done to your HRNG chip, this door is secured. Your time is better spent focusing on the other doors, or the windows.
(But you should be very careful in using HRNG output directly. I am very surprised to read that some open source OSes disable the stream cipher if a HRNG is present - this is a very bad idea!)
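The combining step described above is literally a byte-wise XOR; a minimal sketch in C (the function name is hypothetical, and independence of the two streams is assumed):

    #include <stddef.h>

    /* XOR a stream-cipher keystream into hardware-RNG output. If the two
     * inputs are independent, the result is at least as unpredictable as
     * the stronger of the two. */
    void xor_combine(unsigned char *out, const unsigned char *hwrng,
                     const unsigned char *keystream, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = hwrng[i] ^ keystream[i];
    }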
Re: Random number generators are hard (Score:4, Funny)
Pick two random dates between 1950 (or earlier... arbitrary cut-off) and today. Then go to that date and find all the sports scores from that day. Do some random math on those scores independently. Then take those two results and do some more random math between the two.
Add in more Nth days as you please.
That enough entropy for you? I guess it might not be. I suppose you could also factor in which days of the week the Cleveland Browns are likely to win on, since that is definitely random.
Re:Random number generators are hard (Score:4, Informative)
Good for you. That is still not a viable solution for generating cryptographic keys, IVs, salts, and so on. Two drawbacks with your idea:
1. Too slow. You need far more random data than a single Zener diode can generate. You could combine many of them, but then you need to combine them the right way.
2. Unreliable. Zener diodes are easily affected by temperature, and you need to make sure that hardware flaws don't make them produce 1 more often than 0 (or the other way around).
This is why we need software RNGs. We take a good hardware-based seed from multiple sources (combined using SHA256 or something similar), and then use that to seed a CSPRNG (not just any PRNG). The CSPRNG then generates a very long stream of secure random data.
I'm not too pleased about the design of Fortuna, but it seems like one of the better choices for how to combine HW input and generate an output stream.
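That "combine with SHA256, then stretch" pattern looks roughly like this. A sketch using OpenSSL's legacy SHA256 interface; this is not Fortuna itself, and the naive output stage is for illustration only:

    #include <stdint.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* Combine several hardware seed buffers into one 256-bit seed. */
    void combine_seeds(const unsigned char *src[], const size_t len[],
                       int nsrc, unsigned char seed[SHA256_DIGEST_LENGTH])
    {
        SHA256_CTX ctx;
        SHA256_Init(&ctx);
        for (int i = 0; i < nsrc; i++)
            SHA256_Update(&ctx, src[i], len[i]);
        SHA256_Final(seed, &ctx);
    }

    /* Naive output stage: block = SHA256(seed || counter). Real designs
     * (Fortuna, NIST Hash_DRBG) add reseeding and backtracking resistance. */
    void next_block(const unsigned char seed[SHA256_DIGEST_LENGTH],
                    uint64_t counter, unsigned char out[SHA256_DIGEST_LENGTH])
    {
        unsigned char buf[SHA256_DIGEST_LENGTH + sizeof counter];
        memcpy(buf, seed, SHA256_DIGEST_LENGTH);
        memcpy(buf + SHA256_DIGEST_LENGTH, &counter, sizeof counter);
        SHA256(buf, sizeof buf, out);
    }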
Re: (Score:2)
2. Unreliable. Zener diodes are easily affected by temperature, and you need to make sure that hardware flaws don't make them produce 1 more often than 0 (or the other way around).
A good implementation assumes that there is an unknown bias and produces an unbiased (to within 50% +/- whatever margin you want) output anyway.
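The classic way to do that is von Neumann's corrector: take raw bits in pairs, output 0 for 01 and 1 for 10, and discard 00 and 11. Assuming the raw bits are independent (even if biased), the output is exactly unbiased; a sketch:

    /* Von Neumann corrector: consume raw bits in pairs; return the
     * debiased bit, or -1 when the pair (00 or 11) must be discarded.
     * Valid only if successive raw bits are independent. */
    int von_neumann(int raw_a, int raw_b)
    {
        if (raw_a == raw_b)
            return -1;          /* discard 00 and 11 */
        return raw_a;           /* 01 -> 0, 10 -> 1 */
    }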
Re: (Score:2)
The TCO of a randomly measured Zener diode or other noise source is likely to be pretty darn close, cycle to cycle, as it takes a fair TCO delta to produce much randomization change. It is therefore NOT so easy to change unless you apply a lot of temperature -- to the same diode. Change the family, or even the diode within a family (though less so within a batch), and you can get some delta.
I agree with all the suggestions that multiple hashing techniques can be used to prevent filling the entropy pools, and multiple...
Re: (Score:3)
There are all kinds of ways to design the hardware to prevent problems besides the obvious of measuring various environmental variables like temperature.
Feedback from the output of each slicer can adjust the comparison threshold to produce an output with an equal number of high and low states over an arbitrary period of time. This technique is commonly used for closed-loop control of duty cycle in applications requiring precision waveform generation. The slicer threshold can be monitored as another health...
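Modeled in software, that feedback is just an integrator nudging the threshold toward a 50% duty cycle; a toy sketch (the names and step size are illustrative, not any particular product's design):

    /* Toy model of slicer-threshold feedback: every output bit nudges the
     * comparison threshold so ones and zeros balance over the long run.
     * 'step' sets the loop bandwidth; 'sample' is the raw input level. */
    static double threshold;

    int slice(double sample, double step)
    {
        int bit = sample > threshold;
        threshold += bit ? step : -step;    /* drive duty cycle toward 50% */
        return bit;
    }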
Re: (Score:2)
It seems unlikely that you would get good results from such a setup without whitening the data, but you don't mention doing that. Did you try it with anything other than the rather old Diehard tests? Do you know they are not an indication of cryptographically secure randomness? Do you have a schematic that could be reviewed for errors?
Re: (Score:2)
I built this in the mid-'90s, and at the time the old DOS-based Diehard tests were the best I could find. Certainly the stream should be whitened before being used in cryptography, but the data directly from the board passed the Diehard tests without whitening. However, the board XORed 80 bits from the A/D to produce a single output bit (or more - this was software controllable). Since the 8-bit randomness accumulator was rotated 1 bit every sample, each bit was XORed 10 times with each of the A/D outputs. I...
Re: Random number generators are hard (Score:4, Informative)
Not all Linux systems use the GNU userspace.
Re: Random number generators are hard (Score:5, Informative)
Android. Many embedded systems. Many micro systems, such as tomsrtbt or similar (now virtually unneeded, due to the lack of floppy drives on new computers and the prevalence of booting off CDs or USB flash drives). Many lightweight systems [wikipedia.org], such as Damn Small Linux.
Etc.
See also: Toybox.
Re: (Score:3, Interesting)
I don't know of any distros that completely eschew GNU. The two are very tightly integrated: originally the kernel was written to run GNU, and even now it cannot be built with anything but gcc. While I believe they have done much to contribute to the free software we care about, loads of people here on /. have some disdain for GNU, and with that in mind might want to ban it from their systems. You can do this by replacing the individual components. The BSD userland has been successfully ported to Linux, and with P...
Re: (Score:3)
Plan 9 failed simply because it fell short of being a compelling enough improvement on Unix to displace its ancestor. Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough.
-- Eric S. Raymond [3]
Dilbert RNG (Score:5, Funny)
Re:Dilbert RNG (Score:5, Funny)
Re:Dilbert RNG (Score:5, Funny)
I didn't even click on the link and knew it was some fag linking xkcd.
Well, it is a link that leads to xkcd.com, so it's not exactly difficult to figure out that that's where the link leads.
Re: (Score:3, Funny)
True slashdotters do not read the links either.
Re:Dilbert RNG (Score:5, Funny)
http://www.penny-arcade.com/comic/2004/03/19 [penny-arcade.com]
Re:Dilbert RNG (Score:5, Insightful)
I think you need to re-assess your attitude. Perhaps some people have not seen those links? Did you consider the possibility that the comment was not for you? Like rhetorical questions?
Lighten up or Fuck off. You are taking this way too seriously.
Re: (Score:2)
Keep up the good work [xkcd.com]
Re: (Score:2)
I didn't even click on the link and knew it was some fag linking xkcd. It's not clever. It's not funny. Just the subject containing something about RNG, with a link under it and not even a short, useless, one-sentence post, shows the kind of unoriginal, uninspired idiot making the post. It was funny to read when it came out. It's even funny when clicking on the Random button on the site and seeing it. It's NOT funny when someone links to it from a one-sentence post and thinks they're so fucking clever to have discovered xkcd. You probably still use lmgtfy and think you're so damn clever. It means in real life, you're an unoriginal hipster doofus. Got anything to do with sanitizing inputs to a SQL database, etc.? Link to Bobby Tables. Got a nerd-project slow-ass Turing machine? Like a Minecraft logic circuit made from redstone? Link to the one where it's some guy alone in the world making a computer out of rocks. Got a story about password security or encryption? Link to the one where they beat the password out of the guy with a wrench. Fuck off. You're not clever.
http://xkcd.com/1053/ [xkcd.com]
Guess you're not one of today's 10000. Thanks for playing.
Dupe! (Score:5, Funny)
.....wait! it's not what you think.
"a new paper analyzes the robustness of /dev/urandom and /dev/urandom."
So now we're putting the dupes together into the same summary? Jeez, can't we at least wait a few hours first?
Re: (Score:3, Funny)
Coincidence. They were chosen at random.
At what scope of time or size of output data? (Score:5, Insightful)
At what scope/scale of time or range of values does it really matter if a PRNG is robust?
A PRNG seeded by a computer's interrupt count, process activity, and sampled I/O traffic (such as audio input, fan-speed sensors, and keyboard/mouse input; I believe this is a common seeding method) might be deemed sufficiently robust when polled only once a second, or for only 8 bits of resolution. Exactly how much less robust does it get if you poll the PRNG, say, 1 million times per second, or in a tight loop? Does it get more or less robust when the PRNG is asked for a larger or smaller bit field?
Unless I'm mistaken, the point is moot when the only cost of having a sufficiently robust PRNG is to wait for more entropy to be provided by its seeds or to use a larger modulus for its output, both rather trivial in the practical world of computing.
Re:At what scope of time or size of output data? (Score:5, Insightful)
Re: (Score:2, Interesting)
Right up until someone figures out how to put those "ultimately not all that important" technical weaknesses to good use. And then suddenly every last deployed system might turn out to be vulnerable. Which might already be the case, but has been kept carefully secret for a rainy day.
Of course, usually it's a lot of "theoretical" shadow dancing, and given the nature of the field some things will indubitably remain unclear forever (unless another Snowden stands up, who just happens to give us a clear-e...
Re: (Score:3)
The headline is somewhat sensational. There is a pretty wide gulf between an abstract, rather arbitrary metric and a practical vulnerability. This is kinda the security equivalent of pixel peeping: a fun mathematical exercise at best and a pissing contest at worst, but ultimately not all that important.
I am definitely not a statistician, but there may be applications other than crypto and security-related randomization that break (or, rather worse, provide beautifully plausible wrong results) in the presence of a flawed RNG.
Re:At what scope of time or size of output data? (Score:5, Insightful)
Given this, obviously the larger N is, the better. Of course, most standard libraries use a 32-bit (or smaller) generator, and most programmers are lazy or uneducated in the matter... so only half of all 8-billion-to-one shots are even possible with that 32-bit generator, and then each can only be sampled if the generator just happens to be in the perfect 4-billion-to-one state...
As the saying goes...Random numbers are too important to be left to chance.
Re:At what scope of time or size of output data? (Score:5, Interesting)
Useless for you. But the NSA might disagree. The math is what keeps them at bay. If the math shows cracks, it's a safe bet that the NSA has figured out some kind of exploit. Keep in mind that the NSA doesn't rely on just one technique, but can aggregate multiple data sources. So those interrupts that the RNG relies on can be tracked, and the resulting number can be narrowed to a searchable space. Keep in mind that 2^32, which is big by any human standard, is minuscule for a GPU.
Re:At what scope of time or size of output data? (Score:4, Insightful)
Exactly. If the recent leaks have taught us anything, it's that the NSA has managed to produce real working exploits where such issues had previously been dismissed as nothing to worry about because they were "only theory".
At this point it's stupid to assume that just because you can't come up with a working exploit, someone with the resources the NSA has hasn't already.
It's of course not even just the NSA people should worry about; it seems naive to think the Russians, Chinese, et al. haven't put similar resources into this sort of thing. The difference is they just haven't had their Snowden incidents yet. I'd imagine the Chinese and Russians have exploits for things the NSA hasn't managed to break, as well as the NSA having exploits for things the Chinese and Russians haven't managed to break. Then there's the Israelis, the French, the British, and many others.
It's meaningless to separate theory and practice at this point. If there's a theoretical exploit then it should be fixed, because whilst it may be just theoretical to one person, it may not be to a group of others.
Re:At what scope of time or size of output data? (Score:5, Insightful)
Your attitude is exactly what is wrong with security. Quite a few people still use MD5 because "it is not that broken". Linus really should take a look at this new, provably better method and adopt it ASAP, not wait until it bites hard.
Re: (Score:2, Insightful)
Lots of people still use MD5, because it's widely available, fast, and good enough for what they need. Just like CRC32 is good enough for certain tasks. MD5 may be broken for cryptographic purposes, but it's still fine to use for things like key-value stores.
Re: (Score:2)
If you've got state compromise, you've already run code that you didn't want to run, and therefore you are not in control of your machine any more.
Game over. They're logging your keystrokes too, probably.
Re:At what scope of time or size of output data? (Score:5, Interesting)
First of all, not all computers are PCs. A server running in a VM has no audio input, fan-speed sensor, keyboard, mouse, or other similar devices that are good sources of entropy. A household router appliance running Linux not only has no audio input, fan, keyboard, or mouse; it doesn't even have a clock it can use as a last-resort source of entropy.
Second, there are many services that require entropy during system startup. At that point there are very few interrupts, no mouse or keyboard input yet, and some of the sources of entropy may not even be initialized yet.
One problematic situation is initializing a household router. On startup it needs to generate random keys for its encryption, TCP sequence numbers, and so on. Without a clock, a disk, a fan, or any peripherals, the only good source of entropy it has is network traffic, and there hasn't been any yet. A router with very little traffic on its network may take ages to see enough packets to gather a decent amount of entropy.
Re:At what scope of time or size of output data? (Score:4)
VMs do have good sources of entropy... While a server indeed has no audio / fan / keyboard / mouse inputs (whether physical or virtual), a server most certainly does have a clock (several clocks on x86: TSC + APIC + HPET). You can't use clock skew (as low-res clocks are implemented in terms of high-res clocks), but you can still use the absolute value on interrupts (and servers get a lot of NIC interrupts) for a few bits of entropy. Time is a pretty good entropy source, even in VMs: non-jittery time is just too expensive to emulate; the only people who would try are the blue-pill hide-that-you-are-in-a-VM security researchers.
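A crude sketch of harvesting those interrupt-time clock values. This is only the general idea behind jitter-style collectors, not the kernel's actual code; __rdtsc() assumes x86 and a GCC/Clang-style compiler:

    #include <stdint.h>
    #include <x86intrin.h>

    /* Call on each event (e.g. per NIC interrupt): keep only the low
     * bits of the cycle counter, which jitter with cache, bus, and
     * scheduler noise even inside a VM. */
    uint32_t harvest_tsc_bits(void)
    {
        uint64_t tsc = __rdtsc();
        return (uint32_t)(tsc & 0xff);      /* a few noisy low-order bits */
    }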
The real security concern with VMs is duplication... If you clone a bunch of VMs that start with the same entropy pool, and each generates an SSL cert after cloning, those certs will be easily predicted from one another. (Feeling good about your EC2 instances, eh?) This isn't a new concern - cloning via Ghost has the same problem - but it's easier to get yourself into trouble with virtualization.
Re: (Score:2)
A VM is by definition running on a host that can skew what the VM sees, and so is intrinsically insecure ...
Re: (Score:2)
Yeah, I encountered that the other day. Built a VM, took a snapshot, did some stuff, reverted, did the same stuff. I was testing a procedure doc I was writing. Part of the procedure was creating an SSL cert, and I got an identical one on both attempts. That seems a little fishy to me; I would expec...
Hardware RNG for servers and VMs (Score:3)
I think it is past time for CPUs to provide hardware random numbers. Via CPUs have done this [slashdot.org] for years, but Via CPUs are just too slow for most uses. (I used to run my mail server on a Via C3... I am a lot happier now that my server runs on an AMD low-power dual-core.)
Recent Intel chips do have some sort of random number generator (RdRand [wikipedia.org]).
Hardware RNG accessories are available but expensive [wikipedia.org].
There is the LavaRnd [lavarnd.org] project, which I think is really darn cool. However, I downloaded the source code, and it hasn...
Re: (Score:3)
Entropy Broker: A project to allow one computer to act as a randomness server to others. Could use any actual hardware machine to seed any number of VMs.
See also the comments in the LWN article, where someone with the user name "starlight" simply sends random data over SSH and then the receiving computer uses rngd to mix that data into the entropy pool. Simple, and simple is good.
https://lwn.net/Articles/546428/ [lwn.net]
http://www.vanheusden.com/entropybroker/ [vanheusden.com]
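What rngd does with those received bytes boils down to the Linux RNDADDENTROPY ioctl, which both mixes data into the pool and credits entropy (a plain write to /dev/random mixes but credits nothing). A minimal sketch, assuming the bytes really are random and the caller has CAP_SYS_ADMIN; the struct mirrors the layout of struct rand_pool_info:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/random.h>

    /* Mix len bytes into the kernel pool and credit them as entropy.
     * Crediting non-random data defeats the pool, so be conservative. */
    int credit_entropy(const unsigned char *buf, int len)
    {
        struct {
            int entropy_count;          /* same layout as rand_pool_info */
            int buf_size;
            unsigned char data[512];
        } req;
        if (len > (int)sizeof req.data)
            return -1;
        int fd = open("/dev/random", O_WRONLY);
        if (fd < 0)
            return -1;
        req.entropy_count = len * 8;    /* bits of entropy claimed */
        req.buf_size = len;
        memcpy(req.data, buf, len);
        int rc = ioctl(fd, RNDADDENTROPY, &req);
        close(fd);
        return rc;
    }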
Re:At what scope of time or size of output data? (Score:5, Insightful)
It probably has a radio with signals of varying strengths and packet losses.
This is true, although I suspect that a lot of the time the CPU isn't going to get to see much of that stuff (though with most wireless chipsets being softmac devices, maybe it does?).
It also probably has a multitude of routed, nonrouted, and broadcast packets on its various network interfaces.
In a dark-start situation (coming up from a wide-area power outage), possibly not. Imagine an ADSL router where all the clients are connected via wireless. The clients can't talk to the router until it has managed to initialise the wireless encryption subsystems, which requires entropy. Even on a wired network, you may well see only a few DHCP requests from workstations. Obviously, rebooting a router onto a large active network is different from rebooting the entire network at the same time, as would happen in a power outage.
And on top of that, it's connected to a global network of nodes with varying packet delivery times, where at any time it can ask for a multitude of continuously changing and at least partially stochastic metrics (e.g. exchange rates, the 4th word of news headlines, YouTube +1 counts, etc., etc.).
A global network that it may well be unable to access until it has enough entropy to make a cryptographic handshake with the upstream peer.
Yawn (Score:5, Interesting)
The output of a software RNG, aka PRNG (pseudo random number generator), is completely determined by a seed. In other words, to a computer (or an attacker), what looks like a random sequence of numbers is no more random than, let's say,
(2002, 2004, 2006, 2008, 2010, 2012...)
However, the PRNG sequence is often sufficiently hashed up for many applications such as Monte Carlo simulations.
When it comes to secure applications such as cryptography and Internet gambling, things are different. A single PRNG sequence is pathetically vulnerable, and one needs to combine multiple PRNG sequences, using seeds that are somehow independently produced, to provide a combined stream that hopefully has acceptable security. But a COTS PC or phone doesn't allow developers to create an arbitrary stream of independent RNG seeds, so various latency tricks are used. In general, these tricks can be defeated by sufficient research, so a secure service often relies partly on "security through obscurity", i.e. not revealing the precise techniques for generating the seeds.
This is hardly news. For real security you need specialized hardware devices.
Re:Yawn (Score:4, Funny)
Yep. I think it's about time to hook up the ol' lava lamp [lavarnd.org].
Re: (Score:2)
Re: (Score:2)
Or the dice-o-matic [gamesbyemail.com] :-)
Re: (Score:2)
LavaRnd works based on a CCD in a dark can producing noise, not a lava lamp.
Re:Yawn (Score:4, Informative)
"For real security you need specialized hardware devices."
Indeed. And it's worth considering that the Raspberry Pi has a hardware RNG built in. Also, the tools to make it functional are in the latest updates to Raspbian...
Did I mention that an RPi B is actually cheaper than the least expensive dedicated HWRNG device (the Simtec Entropy Key - which is waaaayyyy backordered at the moment) - and about three times faster?
Re: (Score:2, Interesting)
So do Intel CPUs.
The previous discussion was about whether or not to remove the Intel hardware RNG from the equation, because Intel is an American company and as such subject to NSA requirements. I.e., nobody knows whether the Intel hardware RNG is a true random number generator or a pseudo-random number generator with a secret key that only the NSA knows.
(Some would claim that such a suggestion is tinfoil-hat material, but lately, Edward Snowden has been making the tinfoil-hat crowd say "damn, it's worse tha...
Re: (Score:2)
You can check how a software RNG works.
You cannot, generally, check how a hardware RNG built into a chip works.
And heck... if the one in the RasPi is good enough for you, then the Intel equivalent should probably suffice. However, what good are true random numbers when you want a repeatable string of random numbers for which the seed is hard to guess from the numbers...
Hear me out: Locally Generated Entropy Pool (Score:4, Interesting)
So, with all the 'revelations' and discussion surrounding this and encryption over the past several weeks, I've been wondering if a local hardware-based entropy solution could be developed. By 'solution', I mean an actual piece of hardware that takes various static noise from your immediate area, ranging from 0-40+ kHz (or into MHz or greater?), both internal and external to the case, and with that noise builds a pool for what we use as /dev/random and /dev/urandom. Perhaps each user would decide what frequencies to use, with varying degrees of percentage to the pool, etc., etc.
It just seems that with so much 'noise' going on around us in our everyday environments, we have an opportunity to use some of it as an entropy source. Is anyone doing this? It seems like a fairly obvious implementation.
Re: (Score:3)
I linked to LavaRnd in a reply to an earlier post, but at the risk of being redundant, I'll mention it again [lavarnd.org].
Re: (Score:2)
Unless this abstract can be translated into an actual attack of some sort, the chances are that changes would mostly create new holes rather than improve security.
Re: (Score:3)
Re:Hear me out: Locally Generated Entropy Pool (Score:4, Informative)
Yes. Hardware high-speed random number generators are trivial. I did it with amplified Zener noise through an 8-bit 40 MHz A/D XORed onto an 8-bit ring-shift register, generating 0.5 MB/second of random data that the Diehard tests could not differentiate from truly random numbers. XOR that with an ARC4 stream, just in case there's some slight non-randomness, and you're good to go. This is not rocket science.
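A sketch of that accumulate-and-rotate loop, with the ADC read stubbed out (read_adc() is hypothetical; the original board did this at 40 MHz in hardware):

    #include <stdint.h>

    extern uint8_t read_adc(void);  /* hypothetical: 8-bit sample of amplified Zener noise */

    /* XOR many samples into a rotating 8-bit accumulator so every output
     * bit is folded from many independent noise samples (80 per byte on
     * the board described above). */
    uint8_t random_byte(int samples_per_byte)
    {
        static uint8_t acc;
        for (int i = 0; i < samples_per_byte; i++) {
            acc ^= read_adc();
            acc = (uint8_t)((acc << 1) | (acc >> 7));   /* rotate left one bit */
        }
        return acc;
    }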
Re: (Score:2, Insightful)
RC4 is cracked, and a 120MHz radio signal can bias your generator.
(That wiped the smile off your face, didn't it!)
Two unstable oscillators, at different frequencies, work well, with an appropriate whitener and shielding. You need to determine when measurable entropy in the raw stream falls below a certain level, and cease output. Use that to seed a pure CSPRNG; the Intel generator uses this construct with CTR_DRBG (AES) (which seems to be OK, if you trust AES, unlike the harebrained and obviously backdoored Dual_EC_DRBG). You als...
Re: (Score:2)
No, a 120MHz radio signal cannot bias my generator unless the signal is so strong that the noise signal goes outside the ADC input range. Simply adding non-random bias to a random signal does not reduce the randomness in the signal. Simply XOR all the ADC output bits together for 80 cycles, where the low 4 bits correlate sample-to-sample by less than 1%, and you get an error of less than 1 part in 10^50 of non-randomness. Add any signal you like to the noise, and it won't make any difference, so long as...
Re: (Score:2)
Dual_EC_DRBG did well against Diehard but is known to be backdoored by the NSA. The Linux PRNG does well against it too. In other words, Diehard is known to be a poor test of a cryptographically secure PRNG.
Unfortunately, there is no simple suite of tests you can perform to make this determination. Zener noise is a good source of entropy, but the chances that your A/D is unbiased, or that XORing with an ARC4 stream is enough to remove the bias completely, are slim. At best you created another useful source of...
Easy, fast random numbers (Score:2)
So write a better one (Score:2)
These guys had the time, brains, and resources available to do a full breakdown of how /dev/random might not be so random... why not go all the way, submit a patch, and fix it?
Re: (Score:2)
Re:So write a better one (Score:5, Informative)
"Not so random" means that you can mathematically calculate how likely it is that you can predict the next number over a long time. If you can predict the next number with an accuracy of 1 in 250 while the random generator provides 1 in 1000 then the random generator isn't that random.
Many random generators pick the previous value as the seed for the next value, but that is definitely predictable. Introduce some additional factors into the equation and you lower the predictability. One real problem with random generators that use the previous value as a seed without adding a second factor is that they can't generate the same number twice or three times in a row (which actually is allowed under the rules of randomness).
It's a completely different thing to create a true random number. For a 32-bit number you essentially need one generator source for each bit that doesn't care how the bit was set previously. It is a bit tricky to create that in a computer in a way that also allows fast access to a large volume of random numbers and prevents others from reading the same random number as you do.
For computers it's more a question of "good enough" to make prediction an unfeasible attack vector.
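To make the predictability point concrete, here is a textbook linear congruential generator: a single observed output determines the entire future stream (the constants are the well-known Numerical Recipes values, used purely for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* Classic LCG: the whole future stream follows from one output. */
    uint32_t lcg_next(uint32_t prev)
    {
        return prev * 1664525u + 1013904223u;
    }

    int main(void)
    {
        uint32_t observed = 123456789u;     /* a single leaked output */
        for (int i = 0; i < 5; i++) {
            observed = lcg_next(observed);
            printf("predicted next output: %u\n", observed);
        }
        return 0;
    }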
Re: (Score:2)
Because whilst they may have had the time and resources to research it and come up with a solution, they don't necessarily have the time and resources to fight their patch past Commander Torvalds' ego and army of protective zealots.
Many bright academics simply have better things to do than get involved in that sort of childishness. Research is what they enjoy, and they have neither the time nor the interest in political bickering to get involved, especially if it flies in the face of logic - i.e. someone refusin...
Re: (Score:2)
Because whilst they may have had the time and resources to research it and come up with a solution, they don't necessarily have the time and resources to fight their patch past Commander Torvalds' ego and army of protective zealots.
Or maybe, now that they have actually produced a paper that may or may not define a real issue with the kernel in depth (I'll wait and see how it is peer reviewed, as I have no idea about theoretical weaknesses created by different ways of combining sources of entropy to create random data), rather than some moronic bullshit petition based on a completely incorrect guess at how something works with no real investigation of the actual system involved, the kernel team will actually l...
Re: (Score:2)
You're just focusing on one incident, though, and there have been hundreds over the years. Often Linus is right and is just dealing with an idiot, but sometimes he's not.
But I'm simply offering a response to the GP's post as to why people may wish to steer clear of contributing a fix once they've written a paper. If a clique is unnecessarily cruel to anyone, no matter how stupid that person, then that's going to put people off interacting with it directly. As far back as the original Tanenbaum debate he's...
I knew the day would come (Score:3)
From a practical side, we also give a precise assessment of the security of the two Linux PRNGs, /dev/random and /dev/urandom
The internet must have finally run out of porn.
Re: (Score:2)
Re: (Score:2)
Penises are anything but random on chatroulette.
Incorrect and irresponsible headline (Score:5, Interesting)
I swear, if I worked for the NSA I'd be pushing out headlines like this to make people ignore real security issues...
The article is a highly academic piece that analyzes the security of the Linux RNG against a bizarre and probably pointless criterion: an attacker's ability to predict the future output of the RNG assuming he knows the entire state of your memory at arbitrary, attacker-selected points in time and can add inputs to the RNG. Their analysis that the Linux RNG is insecure under this (rather contrived) model rests on an _incorrect_ assumption: that Linux stops adding to the entropy pool when the estimator concludes the pool is full. Instead they offer the laughable suggestion of using AES in counter mode as a "provably secure" alternative.
(Presumably they couldn't get a paper published that said "don't stop adding entropy just because you think the pool is at maximum entropy", either because it was too obviously good a solution or because their reviewers might have noticed that Linux already did that.)
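For reference, the AES-CTR construction the paper proposes is straightforward to sketch with OpenSSL's EVP API. This is a bare keystream generator, not a full NIST CTR_DRBG with reseeding, and the function name is ours:

    #include <string.h>
    #include <openssl/evp.h>

    /* Fill out[0..n) with AES-128-CTR keystream under a 16-byte seed key.
     * Encrypting zeros yields the raw keystream. The zero IV must never
     * be reused with the same key. */
    int ctr_prng(const unsigned char key[16], unsigned char *out, int n)
    {
        unsigned char iv[16] = {0};
        int outl = 0;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        if (!ctx)
            return -1;
        memset(out, 0, n);
        if (!EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, iv) ||
            !EVP_EncryptUpdate(ctx, out, &outl, out, n)) {
            EVP_CIPHER_CTX_free(ctx);
            return -1;
        }
        EVP_CIPHER_CTX_free(ctx);
        return outl;
    }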
Re: (Score:2)
Re:Incorrect and irresponsible headline (Score:5, Informative)
Their analysis that the Linux RNG is insecure under this (rather contrived) model rests on an _incorrect_ assumption: that Linux stops adding to the entropy pool when the estimator concludes the pool is full.
Exactly. The maintainer of the /dev/random driver explained this and a lot more about this paper here [ycombinator.com].
Re: (Score:3)
Yeah, because random number generators are like pr0n for the masses, and will distract them from boring stuff like the government monitoring and storing that really stupid thing they said about their boss, right next to their cybersex session with their mistress.
Re: (Score:2)
[wtfamireading.jpg]
Whatever happened to (Score:5, Interesting)
Zener diodes biased into avalanche mode to generate random noise? I don't think even the NSA has figured out how to hack the laws of thermodynamics.
Re: (Score:2)
Somebody's gotta implement it in hardware. Do you trust Intel or AMD? I don't. If I can run an OSS analyzer on it and the results come out positive, I might be convinced. But I'm not sure this feature even exists for any consumer chips.
Re:Whatever happened to (Score:5, Interesting)
Well, Intel and VIA have such things integrated into their processors now. Unfortunately, they (at least Intel; I'm not sure how VIA's implementation works) decided to whiten the data in firmware: you run a certain instruction that gives you a "random" number, instead of just polling the diode. With all the current furor over the NSA stuff, many people are claiming that it *is* hacked.
Re: (Score:2)
Without whitening you're likely to get biased output, which is undesirable in most of the contexts where you'd want a truly random number in the first place. You can do whitening in software, which is fine for things like seeding a PRNG, where you've already got a lot of code for managing incoming entropy, but it makes the instruction difficult to use in other contexts where you just want a random number for direct consumption. On the other hand, pre-whitened data is just as useful for PRNG seeds *and...
Re: (Score:2)
Even if they didn't whiten the data, you could never trust that they didn't bias the measurement somehow. Besides which, TFA isn't questioning the sources of entropy, of which there are many in a typical Linux system; it is questioning the way they are combined.
Re: (Score:3)
Re: (Score:3, Informative)
Re:Whatever happened to (Score:5, Insightful)
Avalanche diodes conduct bursts of current at random times. A true random number generator simply measures the time between those bursts of current, then scales that value to whatever numerical range you need.
You can also time the clicks produced by a Geiger-Müller tube detecting beta radiation from a radioactive source, but that requires a lot more difficult-to-integrate hardware.
Even if you base the final random number on a truly random source, you have to ensure that the scaling routine doesn't introduce any sort of bias into the final value. THAT is the tricky part.
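One way to dodge the scaling problem entirely is to compare successive inter-pulse intervals instead of scaling them; any constant rate bias affects both intervals equally and cancels. A sketch (timestamp() is a hypothetical high-resolution pulse timer):

    #include <stdint.h>

    extern uint64_t timestamp(void);    /* hypothetical: time of next avalanche pulse */

    /* Compare consecutive inter-pulse intervals: 1 if the second is longer,
     * 0 if shorter, -1 (discard) on a tie. */
    int interval_bit(void)
    {
        uint64_t t0 = timestamp(), t1 = timestamp(), t2 = timestamp();
        uint64_t d1 = t1 - t0, d2 = t2 - t1;
        if (d1 == d2)
            return -1;
        return d2 > d1;
    }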
Re: (Score:2)
Use one GM tube for each bit, and let it toggle that bit on or off for each detected decay. To be really sure have different radioactive samples for each GM tube isolated from each other.
If you feel unsure - use two different isotopes, double the number of bits, then XOR them together and use the result as the random number.
Scale up to the number of bits you need.
If that's not random enough, I don't know what would be considered random enough.
To know a god's mind: First become your own god... (Score:2)
Yes, there are apparently laws that approximate how it would work; however, to know the outputs one would need information about all the energy field states that make up the device's matter (and dark matter) at a given Planck time.
In the past, when I've needed a random number generator, I've used a single infrared LED and an IR photodiode connected to a serial port. Poll the difference in the time it takes a photon to build up, be emitted, and be detected, multiple times, and use the lowest bits as the random...
Linus = on NSA payroll? (Score:2, Interesting)
He "confirmed" he'd been asked to backdoor linux, he never confirmed whether or not he agreed... :)
This is only for recovery after state compromise (Score:5, Informative)
If the CSPRNG state is not compromised, the Linux random generators are secure. In fact, the property required for robustness is quite strong: recovery from compromise even if entropy is only added slowly. For example, the OpenSSL generator also does not fulfill the property; Fortuna does, as it is explicitly designed for this scenario.
I also have to say that the paper is not well written: the authors seem to believe that the more complicated the formalism, the better. This may also explain why there is no analysis of the practical impact; the authors seem not to understand what "practical" means and why it is important.
Ted Ts'o on Schneier.com (Score:5, Informative)
has some thoughts on the study and the subject:
https://www.schneier.com/blog/archives/2013/10/insecurities_in.html [schneier.com]
Re: (Score:3)
Here's a direct link to Ted's post [schneier.com]. The most interesting points are (if I understand them correctly):
1. The insecurity discussed in the paper is about how quickly the Linux entropy pool recovers from a compromised state. I.e., imagine that somebody somehow gains full read access to your computer's memory and reads your randomness pool (but kindly does not read all your private keys, etc.), but then loses that access at some later point. How long does it take until the entropy pool has recovered enough entropy to b...
Re:Very Informative. (Score:5, Informative)
If you are gonna do that, might as well link to the comment:
https://www.schneier.com/blog/archives/2013/10/insecurities_in.html#c1909001 [schneier.com]
Random enough? (Score:2)
I think the only question on my mind is: what exactly is it deemed insecure for? Generating public/private key pairs? Doing encryption for SSL/TLS?
I've been around computers for a good number of years, and I know no computer can be truly random, but isn't there a point where we say "it's random enough"? Is the OP saying Linux's RNG isn't "random enough"? And my question is: what isn't it random enough for?
haveged? (Score:2)
http://www.issihosts.com/haveged/ [issihosts.com]
Panic! (Score:2, Funny)
I'm having a security panic over here!
As a quick fix I deleted /dev/random and did ln -s /dev/zero /dev/random
So where is the pull request to fix the problem? (Score:2)
Linus is waiting.
Re: (Score:3)
That's not quite the point.
The two random devices are not there to be alternate sources of randomness. One is meant to be a good source of provably random numbers, gained from hardware randomness mixed into the entropy pool; the other is a source of cryptographically random numbers seeded from the first, but with perhaps orders of magnitude less input entropy per output bit.
The first source will stop outputting when it runs low on entropy.
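On kernels of that era you can watch that blocking directly: open /dev/random non-blocking and the read fails with EAGAIN once the estimated entropy is drained (since kernel 5.6, /dev/random no longer blocks after initialization). A small demonstration:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char buf[64];
        int fd = open("/dev/random", O_RDONLY | O_NONBLOCK);
        if (fd < 0)
            return 1;
        ssize_t n = read(fd, buf, sizeof buf);
        if (n < 0 && errno == EAGAIN)
            puts("entropy estimate exhausted; a blocking read would stall");
        else
            printf("read %zd bytes\n", n);
        close(fd);
        return 0;
    }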
Re: (Score:3)
Bad call, in a fight to the death a tie doesn't help you any...
Re: (Score:2)
What? I got modded offtopic for mentioning cufflinks, but Aighearach gets +1 Funny for bringing in neckties?
Where's the justice, /.?
Re: (Score:2)
And 6 6 6 is also in the set of random numbers.
Re: (Score:2)
But there are so many fools out there who would exclude such a sequence from that set. Which would then make that resulting set of random numbers biased.
Yes, but they're only playing the lottery, so what harm does that do?
Re: (Score:2)
If I were writing a malicious RDRAND instruction, I would have it correlate with the bits it was about to be XORed with, in order to reduce entropy.
The whole assumption that a new source of entropy can't reduce randomness is based on the idea that the new sequence is independent of the other sequences. It doesn't need to be.
Degenerate case: old sequence = new sequence; output: stream of 0s.
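That degenerate case is trivial to demonstrate: if the "new" entropy can observe (or simply equal) the pool it is mixed into, XOR wipes the pool to zeros. A toy illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned char pool[8] = {0xde, 0xad, 0xbe, 0xef, 0x01, 0x23, 0x45, 0x67};
        for (int i = 0; i < 8; i++) {
            unsigned char malicious = pool[i];  /* "entropy" correlated with the pool */
            pool[i] ^= malicious;               /* mixing it in zeroes the pool */
            printf("%02x ", pool[i]);
        }
        putchar('\n');
        return 0;
    }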