Linux Servers' Entropy Pool Too Shallow, Compromising Security
The BBC reports that Black Hat presenters Bruce Potter and Sasha Woods described at this year's Black Hat Briefings a security flaw in Linux servers: too few events are feeding the entropy pool from which random numbers are drawn, which leaves the systems "more susceptible to well-known attacks." Unfortunately, [Potter] said, the entropy of the data streams on Linux servers was often very low because the machines were not generating enough raw information for them.
Also, he said, server security software did little to check whether a data stream had high or low entropy.
These pools often ran dry, leaving encryption systems struggling to get good seeds for their random number generators, said Mr Potter. This might mean they were easier to guess and more susceptible to a brute force attack, because seeds for new numbers were generated far less regularly than was recommended. Update: 08/10 01:05 GMT by T : Please note that Sasha Woods' name was mis-reported as Sasha Moore; that's now been changed in the text above.
Random (Score:5, Funny)
So a random number walks into a bar. The barman says, "I was expecting you"
Re: (Score:2)
My dice rolls are pretty random... https://xkcd.com/221/ [xkcd.com]
Re: (Score:2)
And your references are pretty stale.
cat videos for entropy (Score:5, Funny)
Server rooms could have cameras filming cats to generate more entropy from.
Re: (Score:1)
Subject: 01
Location: Window sill
Energy State: 0
Re:cat videos for entropy (Score:4, Insightful)
Cats are too predictable [tumblr.com]
Re:cat videos for entropy (Score:4, Funny)
Re: (Score:1)
There's always somebody who just has to look.
Re: (Score:3)
SGI once did that with lava lamps. https://en.wikipedia.org/wiki/... [wikipedia.org]
It wasn't a very serious project but I found it interesting and creative.
Re: (Score:2)
There was one with a bunsen burner, too. I think that one generated more entropy, but, of course, it also burned more gas.
SubjectsInCommentsAreStupid (Score:2)
http://www.issihosts.com/haveg... [issihosts.com]
Wouldn't it also be beneficial to enlarge the pool size from the current 1024?
Re: (Score:2, Informative)
I use https://github.com/waywardgeek... [github.com]
Cheap, reasonable bitrate hardware TRNG, for adding entropy. Entirely open source (code and hardware) with plenty of documentation about how it works.
Re: (Score:1)
Of course it is. Unless you're generating crypto keys and need control over available entropy, there is absolutely no reason to use /dev/random instead of /dev/urandom. And you absolutely shouldn't use /dev/random when you don't have to, as it can create system-wide problems when used improperly.
Re:Not True (Score:5, Informative)
I wrote software that used billions of random numbers drawn from /dev/urandom. Not only were they all unique when used as in-memory hashes of reasonable width, but also when stored permanently in databases over an extended period. Sure, that's not a good idea, but it was the only solution we had at the time. I think /dev/urandom is pretty good.
When you don't know what you're talking about, it's best to keep your mouth shut so as not to let everyone else know.
1,2,3,4... ad infinitum are ALL unique, but not random.
Re: (Score:3)
That's very rarely true. The typical result of showing ignorance is being corrected. What do you gain by keeping up (false) appearances?
Re: (Score:2, Informative)
You gain something that is very important and valuable in our society: appearances.
Re: (Score:2)
How many possible values are there? If you randomly draw billions out of only quadrillions, you're likely to get a collision or two.
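A rough birthday-bound estimate (illustrative numbers, not from the post: n = 4 billion uniform draws from N possible values):

$$E[\text{collisions}] \approx \binom{n}{2}\frac{1}{N} \approx \frac{n^2}{2N};\qquad N = 10^{15}\ (\text{quadrillions}) \Rightarrow \approx 8\times10^{3},\qquad N = 2^{64} \approx 1.8\times10^{19} \Rightarrow \approx 0.4$$

So billions of draws from a space of mere quadrillions should indeed collide, while a 64-bit space only just avoids it.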
Screw that petty layman random number generation (Score:3)
Time to go pro [youtube.com].
Re: (Score:2)
Unless it is using /dev/urandom of course, in which case the security will decrease, but it won't block the app.
Which is why you never use /dev/urandom for crypto, and you never use /dev/random for a dice rolling app.
Re: (Score:1)
This is completely wrong. Modern cryptographically secure PRNGs can generate practically unpredictable sequences of as many numbers as you'd like from an initial seed of >= 256 bits of entropy. This is actually how modern cryptography works in general: you wouldn't expect a 256-bit AES key to suddenly become insecure and predictable after encrypting a certain amount of data, would you? Why would a CSPRNG be any different?
In fact, both /dev/random and /dev/urandom are CSPRNGs. /dev/random is *not* a true RNG
For once, Potter or so (Score:1)
have something useful to contribute. And not mess up whole eco-systems. ;-)
Re: For once, Potter or so (Score:5, Funny)
Two points for Gryffindor
Virtualization (Score:5, Interesting)
How much of this problem is due to old assumptions about running on real hardware? That is to say, the entropy pool is fed from lots of sources, most of them system devices. Is this just an unintended consequence of running on cut-down virtual hardware platforms?
Not just virtualization (Score:3)
Virtualization is a strong candidate because everything can be so samey, but it can happen on real hardware too - imagine trying to generate randomness on a basic MIPS-based home router [lwn.net] with flash-based storage, no hardware RNG, typically booting from a fixed RAM disk image, with no hardware clock to keep time when powered off, but generating ssh certs early during its first boot...
Re: Not just virtualization (Score:1)
Linux security devs thought that using packet timings was insecure and so removed it as an entropy source some time ago. They took out some other entropy sources they thought could be manipulated too.
The result is that systems are lacking the entropy needed for secure operation.
Seems like they threw the baby out with the bath water to me.
Re:Virtualization (Score:4, Informative)
How much of this problem is due to old assumptions about running on real hardware? That is to say, the entropy pool is fed from lots of sources, most of them system devices. Is this just an unintended consequence of running on cut-down virtual hardware platforms?
The researchers specifically addressed virtualization in the talk, providing different measurements of entropy available on real vs virtual machines. Real machines generate roughly twice as much entropy per unit of time, but both generate 2-3 orders of magnitude less than systems consume. However, as I noted in my other lengthy post, it's not clear that this really matters.
Re: (Score:2)
How much of this problem is due to old assumptions about running on real hardware? That is to say, the entropy pool is fed from lots of sources, most of them system devices. Is this just an unintended consequence of running on cut-down virtual hardware platforms?
The researchers specifically addressed virtualization in the talk, providing different measurements of entropy available on real vs virtual machines. Real machines generate roughly twice as much entropy per unit of time, but both generate 2-3 orders of magnitude less than systems consume. However, as I noted in my other lengthy post, it's not clear that this really matters.
How significant are these corner cases? It seems to me that the problem is critical only just after a reboot. The warning is about software that starts at boot and requires some random bytes. Can that software be made to wait two or three minutes to allow entropy to build up? (systemd allows deferring the autostart of a program by some amount of time after a reboot, and it can be used in the virtual system that is restarted as well.)
All I understand from the notice is that programm
Re: (Score:2)
Can that software be made to wait two or three minutes to allow entropy to build up?
Even better, the software can be made to watch the entropy pool (there's an ioctl for that) and wait until there's enough. And it can also be made to re-seed periodically while running in order to limit any vulnerability arising from bad estimates early.
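A minimal sketch of that ioctl-based check, assuming Linux's RNDGETENTCNT ioctl on /dev/random (the function name and the 1-second polling interval are just illustrative):

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/random.h>
#include <unistd.h>

/* Poll the kernel's entropy estimate until at least min_bits are available.
 * Returns the current estimate, or -1 on error. Polling is crude; a real
 * program might just call getrandom() and let the kernel do the waiting. */
int wait_for_entropy(int min_bits)
{
    int fd = open("/dev/random", O_RDONLY);
    if (fd < 0)
        return -1;

    int bits = 0;
    for (;;) {
        if (ioctl(fd, RNDGETENTCNT, &bits) < 0) {
            close(fd);
            return -1;
        }
        if (bits >= min_bits)
            break;
        sleep(1);   /* wait for more events to trickle in */
    }
    close(fd);
    return bits;
}
```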
When is not enough entropy a problem? (Score:4, Informative)
For the interested: the Black Hat whitepaper Understanding-And-Managing-Entropy-Usage [blackhat.com].
So it seems this is the classic problem that (Linux) programmers are told to use /dev/urandom (which never blocks) and some programs are doing so at system startup, so there's the opportunity for "insufficient" randomness because not enough entropy has been gathered at that point [stackoverflow.com] in time. In short: using /dev/urandom is OK, but if you are using it for security purposes you should only do it after /dev/random would have stopped blocking for a given amount of data for the first time since system startup (but there's no easy way to determine this on Linux). Or is there? Since the v3.17 kernel there is the getrandom [man7.org] syscall, which has the behaviour that if /dev/urandom has never been "initialised" it will block (or can be made to fail right away by using flags). More about the introduction of the Linux getrandom syscall can be read on the always good LWN [lwn.net]. And yes, the BSDs had defences against this type of situation first :-)
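A small sketch of that getrandom() behaviour (assumes glibc 2.25+ for <sys/random.h>; older systems would call syscall(SYS_getrandom, ...) instead):

```c
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/random.h>   /* getrandom(), GRND_NONBLOCK */

/* Blocks only until the kernel's urandom pool has been initialized once,
 * then fills buf; this is the "safe /dev/urandom" behaviour described above. */
ssize_t get_seed(void *buf, size_t len)
{
    return getrandom(buf, len, 0);
}

/* Non-blocking probe: 1 if the pool has been initialized, 0 if not yet,
 * -1 on some other error. */
int pool_initialized(void)
{
    unsigned char b;
    if (getrandom(&b, sizeof b, GRND_NONBLOCK) == 1)
        return 1;
    return (errno == EAGAIN) ? 0 : -1;
}
```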
So this is bad for Linux systems that make security-related "things" that depend on randomness early in startup, but there may be mild mitigations in real life. If the security material is regenerated at a later point after boot, there may be enough entropy around. If the system is rebooted but preserves entropy from the last boot, this may be mitigated for random material generated in subsequent boots (so long as the material was generated after the randomness was reseeded). If entropy preservation never takes place, then regeneration won't help early-boot programs. If the material based on the randomness is never regenerated, then again this doesn't help. And if you take a VM image and the entropy seed isn't reset, then you've stymied yourself, as the system believes it has entropy that it really doesn't.
Myths about urandom (Score:5, Informative)
This article, Myths about urandom [2uo.de], explains why it's generally silly to worry about dried-up entropy pools. There are two scenarios where this might be an issue:
1. There is a compromise that allows an attacker to calculate the internal state of the PRNG.
1a. That could be because the PRNG is leaking information about its internal state through its output. That would be really bad, but there are no known or suspected attacks.
1b. The server is compromised in some other way. Then it wouldn't matter whether it's /dev/urandom or /dev/random; you are hosed anyway.
2. There is no 'true' entropy at all, which could happen on a server which does not store its internal state between reboots and which does not manage to gather 512 bits of true entropy-generating interactions between boot time and the first use of /dev/urandom. This would be an issue only in very specific use cases, certainly not as generic as TFA suggests.
Your link explains the problem (Score:2)
This isn't so much about entropy "drying up" a few days after the system has booted - this is more about generating random numbers just after a system has booted and before "enough" entropy was gathered in the first place. From your link:
Not everything is perfect: /dev/urandom happily gives you not-so-random numbers before the kernel even had the chance to gather entropy. When is that? At system start, booting the computer.
[...]
but also from your link
FreeBSD does the right thing[...]. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again.
[...]
On Linux it isn't too bad, because Linux distributions save some random numbers when booting up the system (but after they have gathered some entropy, since the startup script doesn't run immediately after switching on the machine) into a seed file that is read next time the machine is booting.
[...]
And it doesn't help you the very first time a machine is running, but the Linux distributions usually do the same saving into a seed file when running the installer. So that's mostly okay.
[...]
Virtual machines are the other problem. Because people like to clone them, or rewind them to a previously saved check point, this seed file doesn't help you.
So not great but not (always) a disaster and modern Linux allows programs to counter this if they wish by using getrandom [slashdot.org].
Re: (Score:1)
Why, in the real world, is this so important? I am obviously missing something. Hopefully someone will hit me with a clue stick. I fail to see why this desire for as close to truly random as one can get is actually a valuable thing for, well, pretty much anything. Given the frequency of these posts (with seemingly good replies) I am obviously not getting something.
Re: (Score:3)
Because a lot of security boils down to "I'm thinking of a number between 0 and $something, I bet an attacker can't guess it at a rate better than blind chance".
e.g. a 128 bit encryption key is a number between 0 and 340282366920938463463374607431768211455. With a secure random number generator, an attacker will have to on average test half of those possible keys before he finds the correct one, because he can't know anything that will reduce the space he has to search.
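To put a rough number on that (illustrative figures, not from the comment above), assume an attacker who can test 10^12 keys per second:

$$2^{128} \approx 3.4\times10^{38}\ \text{keys},\qquad \text{expected work} \approx 2^{127} \approx 1.7\times10^{38}\ \text{guesses},\qquad \frac{2^{127}}{10^{12}/\mathrm{s}} \approx 1.7\times10^{26}\ \mathrm{s} \approx 5\times10^{18}\ \text{years}$$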
If your random number generator is br
Re: (Score:1)
That makes sense. I knew I had to be missing something. I am not a crypto guy or even a systems hardening guy. Whilst I am a maths geek, I really do not do anything with crypto. Thanks.
Re: (Score:2)
Thanks for posting the link. It's so good, IMO, that I had it in my bookmarks. It points out that the entropy is fed to the CSPRNG.
Disclaimer: I'm not a crypto guy, but I do read a fair amount about it.
One CSPRNG algorithm is to run a block cipher in counter (CTR) mode. That is, run 1, 2, 3, ... through the cipher and use the output of that. You start with a random key. But the security of the CSPRNG is based on breaking the cipher (e.g. finding the key) from the output of the CSPRNG that is the "random" th
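A hedged sketch of that construction using OpenSSL's EVP interface with AES-256 in CTR mode (the function name is made up; this illustrates the core idea only, with no reseeding or backtracking resistance, so it is not a replacement for the kernel CSPRNG or a proper CTR_DRBG):

```c
#include <stddef.h>
#include <openssl/evp.h>

/* Expand a 256-bit seed (the AES key) into outlen pseudorandom bytes by
 * encrypting an incrementing counter. Never reuse the same seed, since the
 * counter always starts at zero here. Returns 1 on success, 0 on failure. */
int ctr_csprng(const unsigned char seed[32], unsigned char *out, size_t outlen)
{
    unsigned char zero_iv[16] = {0};   /* initial counter block */
    unsigned char zeros[64]   = {0};   /* CTR of zeros == raw keystream E_k(counter) */
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int ok = 0, n = 0;

    if (!ctx)
        return 0;
    if (EVP_EncryptInit_ex(ctx, EVP_aes_256_ctr(), NULL, seed, zero_iv) != 1)
        goto done;

    for (size_t off = 0; off < outlen; ) {
        size_t chunk = (outlen - off < sizeof zeros) ? outlen - off : sizeof zeros;
        if (EVP_EncryptUpdate(ctx, out + off, &n, zeros, (int)chunk) != 1)
            goto done;
        off += (size_t)n;
    }
    ok = 1;
done:
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}
```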
Re: (Score:1)
Re: Entropy as a service (Score:1)
That's a really terrible idea, just like outsourcing the generation of your security keys would be.
We've Been Complaining About That For Years (Score:5, Funny)
But I have a solution! A good solution! A GREAT solution! Behold! Yes, a banana! As we all know, bananas are radioactive! So all we need to do is attach a particle detector to our computer and put a bunch of bananas right on top! Boom! Bananarand! You'll just need to remember to change your bananas out every so often as their half-life is very short. After about a week your bananas will decay into fruit fly particles (I'm not a nuclear scientist, I just play one on TV.)
All right fine, if you don't want to use a banana, United Nuclear has some lovely uranium samples for sale at the moment. Pretty sure you get on a list if you actually order one. Possibly if you click on that link. The radioactive Fiestaware they're selling would probably also work. While you're there, check out their selection of EXTREMELY DANGEROUS MAGNETS!
Virtualization and Cloud issue (Score:2)
This is very true now that everything goes towards virtualization and clouds. Systems in the cloud are especially susceptible to low entropy issues.
Companies like Amazon and Google should step up and provide true hardware entropy sources for systems that they host.
And it's a known problem; there has been chatter about it for years, yet no one has stepped up.
Hopefully exposure at Black Hat will finally get them to do something about it.
Re: Virtualization and Cloud issue (Score:2)
People have already solved that problem, e.g. using haveged. I run entropy feeders on all my virtual hosts, which helps a lot, since all data communications should be encrypted when talking to 'foreign' hosts.
Not a very good summary (Score:5, Informative)
I attended the (very interesting) Black Hat talk, and neither the article nor the /. summary do a very good job of summarizing it.
From memory (I didn't take notes), the key points were:
1. Tracking the entropy pool is a little harder than you might expect, because the kernel itself uses quite a lot. The primary reason is ASLR, but there are other ways the kernel uses entropy itself. The kernel is effectively drawing from /dev/urandom at a very high rate, pulling thousands of bits every time it starts a process.
2. /dev/urandom vs /dev/random work a little differently than most people expect. There are actually three entropy pools, a "main" pool, a /dev/random pool and a /dev/urandom pool. Both /dev/random and /dev/urandom use the same PRNG, and both try to maintain 0 bits of entropy in their local pools, drawing a block from the main pool when needed and mixing it into their local pools (keep in mind that a pool is many bytes of data plus an estimate of how much entropy is in that data). /dev/random, obviously, blocks when it runs out of entropy in its local pool and there isn't enough in the main pool to satisfy the request. /dev/urandom works the same way, except that (a) it won't block and (b) it won't draw the main pool below 128 bits. When the main pool drops to 128 bits, /dev/urandom stops pulling from it.
3. The rate of entropy going into the main pool is low, on the order of single-digit bits per second. For some reason Potter and Woods didn't understand, using a daemon to get bits from the CPU HWRNG not only didn't increase the estimated entropy inflow rate, but actually decreased it (I had to step out for a bit around that point in the talk so I missed details).
4. Points 1, 2, and 3 taken together mean that the entropy pool is basically never above 128 bits. The kernel is always drawing on /dev/urandom, typically at a much higher rate (hundreds to thousands of bits per second pulled from urandom vs <10 bits per second going in).
5. OpenSSL seeds its internal CPRNG during startup and then just uses it, potentially forever, without ever reseeding. Worse, when it seeds from /dev/urandom at startup it makes no effort to check whether or not the kernel pool has any entropy to give it. It just gets some bytes and goes. This means that if an apache server starts up when there isn't much entropy in the pool, that very small amount of entropy could get stretched over a very large number of cryptographic operations. (Aside: I do a little work on and with BoringSSL, Google's OpenSSL fork, and it does reseed regularly, and also uses a more trustworthy CPRNG. I highly recommend using BoringSSL rather than OpenSSL.)
6. Android actually does better than desktop/server Linux, producing many more bits per second of entropy, though still far less than are demanded. Potter attributes this to the rich source of randomness available from the cellular radios.
How much any of this matters is hard to say. Entropy estimation is something of a black art at best, or wild ass guess at worst. Also, the kernel is known to be extremely conservative with at least one of its entropy sources: the HW RNG in most CPUs. Because there's concern the NSA may have backdoored those HW RNGs the kernel assumes their output is perfectly predictable, meaning that it provides zero entropy. The kernel mixes in HW bits anyway, because they can't hurt even if they are 100% predictable, and to whatever extent they're unpredictable they help.
In addition, the whole concept of entropy pools of known size which are drawn down and refilled is an extremely conservative one. Modern CPRNGs are able to produce enormous amounts of uniformly-distributed, unpredictable bits from fairly small seeds -- say, 256 bits of entropy. Assuming the kernel ever manages to get even a
Re: (Score:2)
That would only be a problem if the boot scripts start up OpenSSL before seeding urandom. Are there any server distros that do that? At least CentOS 6 does it in rc.sysinit, way before the network-related stuff is started
Re: (Score:3)
That would only be a problem if the boot scripts start up OpenSSL before seeding urandom. Are there any server distros that do that? At least CentOS 6 does it in rc.sysinit, way before the network-related stuff is started.
That only helps if there's some stored entropy to load during boot. There generally is something to load... how much entropy it has is harder to say.
Re: (Score:2)
Well, I'd say that managing entropy for the PRNG is mainly relevant for network-connected servers that need to generate encryption keys all the time. I assume that the timing of incoming network packets counts towards the entropy pool.(*) There should be plenty of network packets to ensure the accumulation of at least a kilobit of entro
Re: (Score:2)
There should be plenty of network packets to ensure the accumulation of at least a kilobit of entropy during the normal course of operation.
Sure, eventually. But what happens later doesn't matter if Apache has already fired up its workers and they've already initialized their OpenSSL CPRNGs. This is why OpenSSL should re-seed periodically -- and why BoringSSL does re-seed periodically. So if you start with the CPRNG in a predictable state, at least it doesn't stay that way.
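As a rough sketch of what "re-seed periodically" could look like at the application level (assuming OpenSSL's RAND_add() and the getrandom() syscall; BoringSSL does this kind of thing automatically, as noted above):

```c
#include <sys/types.h>
#include <sys/random.h>
#include <openssl/rand.h>

/* Mix 32 fresh bytes of kernel randomness into OpenSSL's CSPRNG state.
 * Call from a timer or every N requests. Returns 1 on success, 0 on failure. */
int reseed_openssl(void)
{
    unsigned char seed[32];

    if (getrandom(seed, sizeof seed, 0) != (ssize_t)sizeof seed)
        return 0;
    /* Credit the full 32 bytes; lower the estimate if the source is weaker. */
    RAND_add(seed, sizeof seed, (double)sizeof seed);
    return 1;
}
```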
I can't find any list online of linux entropy sources that are used.
This is one of the other points the researchers noted: There's very little information available on how it all works other than the source code. Of course, the source code is
Re: (Score:3, Informative)
Basically there's no proof /dev/urandom is less safe than /dev/random: there have been no published results concerning it. It internally mixes in SHA-1 hashes of entropy coming from random events, known attacks against SHA-1 aren't very good [wikipedia.org], and that's before taking into consideration that those hashes are mixed in well. There's a pretty good reason all the attacks come from lack of entropy, but as soon as you get a bit - you're set.
That said, if you have a source of random information yo
Re: (Score:2)
Basically there's no proof /dev/urandom is less safe than /dev/random
Yep. As I said, the whole pool entropy measurement concept is extremely conservative in that there are no known theoretical attacks, much less practical ones, that depend on drawing from /dev/urandom when it has low entropy.
Also, there's an ioctl you can use [man7.org] on /dev/random - RNDGETENTCNT - to see how much entropy your system currently has, if you want to play around with it a bit.
Or you can cat /proc/sys/kernel/random/entropy_avail
Note that doing that will start a new process, which consumes entropy, so don't be surprised if it gradually drops over repeated calls.
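If you want the number without the fork/exec of cat, a tiny sketch that reads the same /proc file in-process:

```c
#include <stdio.h>

/* Return the kernel's current entropy estimate in bits, or -1 on error.
 * Reading /proc directly avoids spawning a new process, so repeated calls
 * shouldn't drain the pool the way repeated `cat` invocations can. */
int entropy_avail(void)
{
    FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
    int bits = -1;

    if (!f)
        return -1;
    if (fscanf(f, "%d", &bits) != 1)
        bits = -1;
    fclose(f);
    return bits;
}
```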
Re: (Score:2)
That's because the kernel randomizes the memory addresses of all dynamically linked libraries of the 'cat' command every time you do that.
Re: (Score:2)
Re: (Score:2)
Also, the kernel is known to be extremely conservative with at least one of its entropy sources: the HW RNG in most CPUs. Because there's concern the NSA may have backdoored those HW RNGs the kernel assumes their output is perfectly predictable, meaning that it provides zero entropy.
So unless the NSA has built in a backdoor and it's the NSA trying to hack you, the CPU has plenty of entropy. I'm glad they're trying to make it NSA-proof as well, but I would think any security is fucked if you can't trust the hardware to execute the instructions it gets.
Re: (Score:2)
It's not about not trusting the hardware to execute the instructions. If the NSA were to backdoor the RNGs, they wouldn't do it by making the RNGs produce bad output on command... that would be too easy to detect. Instead, they'd just reduce the entropy of the output, all the time, but not so much that it's detectable. For example, perhaps for every 128-bit output, the RNG generates one of 2^40 128-bit blocks. Now, if you use that RNG's output as an AES key, say, then your key is easily brute-forceable by
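Rough numbers for that hypothetical 2^40 scenario (assuming, say, 10^9 guesses per second):

$$2^{40} \approx 1.1\times10^{12}\ \text{candidate keys},\qquad \frac{2^{40}}{10^{9}/\mathrm{s}} \approx 1.1\times10^{3}\ \mathrm{s} \approx 18\ \text{minutes}$$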
Re: (Score:2)
Re: (Score:2)
Entropy estimation isn't a black art. Here is how you do it: take the SP 800-90B recommended tests - Markov, Compression, Frequency, Collision, and Partial Collection - collect 1 million samples, then take the minimum of the individual test results. Or just do Markov. As long as you take care to analyze raw data (not conditioned), you can find out what your entropy is.
That doesn't tell you anything about the entropy of the data. At all. Try this: Take your favorite CPRNG and seed it with all zeros. Then generate all of the samples you need for exhaustive testing and run all of the tests. The completely deterministic output will pass with flying colors.
While you're at it, try this test as well: Take the output of a known quantum random process (thermal noise, etc.). Don't filter or process it in any way, use the raw data. Now run the tests on it. The truly random dat
why is there no hardware entropy generation ?! (Score:1)
Adding entropy generation hardware on a motherboard or even in the CPU would be trivial.
Would it be cryptographically sound? Probably not, but it's a hell of a lot better than the ad hoc system now in place.
Is there some sort of patent issue preventing this from happening or something?
Re: (Score:1)
no i'm specifically talking about a generator which is hardware, i.e. voltage noise based.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:1)
yeah- that's pretty much exactly what i was talking about.
fuckin-A that's depressing.
i, for one, do not welcome our backdoor-inserting overlords.
That sounds counter-intuitive (Score:2)
Servers process enormous amounts of data that could increase the entropy pool. Although they can be manipulated to some extent, the timing and content of network packets that reach them can hardly be predicted.
Re: (Score:2)
Networking was once an entropy source but was removed because of easy manipulation.
It doesn't matter if an entropy source is subject to manipulation: as long as you gather from the source and XOR (or otherwise mix) its output into the pool, entropy is never lost, even when the inputs come from a source an attacker can influence.
They should also add entropy by invoking the Intel RNG periodically.
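For reference, a sketch of how a feeder daemon (rngd, haveged, etc.) credits entropy to the kernel pool, assuming the RNDADDENTROPY ioctl and struct rand_pool_info from <linux/random.h> (requires root; simply writing bytes to /dev/urandom mixes them in without crediting any entropy):

```c
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/random.h>
#include <unistd.h>

/* Mix `len` bytes from some external source into the kernel pool and credit
 * them with `bits` of entropy. Returns 0 on success, -1 on error. */
int feed_kernel_pool(const unsigned char *buf, size_t len, int bits)
{
    struct rand_pool_info *info = malloc(sizeof *info + len);
    if (!info)
        return -1;

    info->entropy_count = bits;
    info->buf_size = (int)len;
    memcpy(info->buf, buf, len);

    int fd = open("/dev/random", O_WRONLY);
    int rc = (fd >= 0 && ioctl(fd, RNDADDENTROPY, info) == 0) ? 0 : -1;

    if (fd >= 0)
        close(fd);
    free(info);
    return rc;
}
```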
Why... (Score:3)
Re: (Score:1)
VIA is better than Intel and AMD for this (Score:1)
This is FUD (Score:3)
The real issue with Entropy is that developers keep using
Re: (Score:2)
I do this for a living. This presentation is FUD and not applicable to 99% of all configurations. Sure, a headless system with a solid state drive will encounter 'at rest' issues if it idles long enough. This is why /dev/random is designed to block. For the 1% of cases you can always mix in Intel RdRand or Freescale SEC sources.
The real issue with Entropy is that developers keep using /dev/urandom; then all bets are off, as you need to guarantee that the system always has sufficient entropy.
Your comment inadvertently highlights the problem.
Developers don't use /dev/random because it blocks, and there's almost never any point to waiting. Once the pool has accumulated a few hundred bits of entropy, there's no reason not to use /dev/urandom.
But blindly using /dev/urandom is dangerous because in some (fairly rare) cases the system will not have any entropy.
The solution is to use /dev/urandom, but only after verifying that the pool has some entropy. Ideally, it would be nice to have an API th
Re: (Score:1)
The solution is to use /dev/urandom, but only after verifying that the pool has some entropy. Ideally, it would be nice to have an API that allows you to find out how many total bits of entropy have been gathered by the system, regardless of how many remain in the pool at any given point in time. If the system has managed to accumulate a few hundred bits, just use /dev/urandom and get on with life. If it hasn't, use /dev/random and block.
You could build what you are asking for by using the new (since v3.17 kernel) getrandom() [lwn.net] syscall. See the part about emulating getentropy for determining if you've ever had up to 256 bits entropy [man7.org] in its man page for implementing your API suggestion...
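A sketch of that emulation, roughly following the getrandom(2) man page (glibc 2.25+ actually ships getentropy() itself, so this is illustrative):

```c
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/random.h>

/* Succeeds only for requests up to 256 bytes, and fails with EAGAIN if the
 * urandom pool has never been initialized -- i.e. it doubles as a probe for
 * "has the system ever gathered enough starting entropy?" */
int my_getentropy(void *buf, size_t len)
{
    if (len > 256) {
        errno = EIO;
        return -1;
    }
    ssize_t n = getrandom(buf, len, GRND_NONBLOCK);
    if (n < 0)
        return -1;                 /* EAGAIN: pool not yet initialized */
    return ((size_t)n == len) ? 0 : -1;
}
```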
Re: (Score:2)
Re: (Score:1)
Doesn't it essentially let you find out if you've had (since this boot?) up to 256 bits of entropy? You can ask it whether it has had an amount so long as it's less than 256 bits and you can force it to return failure if you ask for an amount it hasn't yet reached. It's not as generic as what you're asking ("tell me how much you've ever had") but it does still sound close to what you're asking for (albeit in a limited 256 bit form).
Re: (Score:2)
Doesn't it essentially let you find out if you've had (since this boot?) up to 256 bits of entropy?
No, not in any way I can see.
You can ask it whether it has had an amount so long as it's less than 256 bits and you can force it to return failure if you ask for an amount it hasn't yet reached.
That's not what it says. It says:
By default, getrandom() draws entropy from the /dev/urandom pool.
[...] If the /dev/urandom pool has been initialized, reads of up to 256 bytes will always return as many bytes as requested and will not be interrupted by signals
So, it's good that the call will block unless the pool has been initialized (unless you give it the GRND_NONBLOCK flag). But "pool has been initialized" doesn't tell you anything about how much entropy has been gathered except that it's non-zero. The bit about 256-byte (note: byte, not bit) reads always returning has nothing to do with entropy available, it's just an assurance that reads up to that size will be fast and atomic with respect to
Re: (Score:1)
You're absolutely right - it only lets you tell whether the pool has been initialised and GRND_NONBLOCK on tells you how much /dev/random could return if it were blocking and nothing about the total amount of entropy gathered.
Re: (Score:2)
You are correct, aside from initial boot, for non-cryptographic tasks
Why can't we have true hardware random generators (Score:1)
I seriously can't believe modern hardware does not have hardware RNGs
Re: (Score:2)
There is one in some x86 CPUs; see here: https://en.wikipedia.org/wiki/... [wikipedia.org]
Save at shutdown, and occasionally grab web data? (Score:1)
So what ever happened to "save the random state at shutdown, and restore the random state at startup"? That, I thought, was standard behavior as of about a decade ago.
If that wasn't enough, then every so often (no more than once a day or so), grab 1K of random bytes from random.org, and add it to the pool at a slow and steady rate (when /dev/random would block, for example); refill that buffer when empty if enough time has passed.
Yes, that basically means 1K bytes, or 8K bits, of entropy only for /dev/rando
Been thinking this for some time (Score:1)
Re: (Score:1)