Encryption Security Science

Ultra-low-cost True Randomness

Cryptocrat writes "Today I blogged about a new method for secure random sequence generation that is based on physical properties of hardware, but requires only hardware found on most computer systems: from standard PCs to RFID tags." Basically he's power-cycling memory and looking at the default state of the bits, which surprisingly (to me anyway) can be used both to fingerprint systems and to generate true random numbers. There is also a PDF paper on the subject if you're interested in the concept.
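
The basic idea, as I understand it, is to read uninitialized SRAM right after power-up: bits that come up the same way on every power cycle form a fingerprint, while bits that flip unpredictably are a source of physical entropy. Here is a rough, hypothetical Python sketch of how such a dump might be post-processed; read_uninitialized_sram() is a stand-in for whatever hardware access the real scheme uses, and none of this is taken from the paper itself.

    import hashlib

    def read_uninitialized_sram(n_bytes=4096):
        # Placeholder: on real hardware this would return the raw SRAM
        # contents captured immediately after power-up, before any writes.
        raise NotImplementedError

    def to_bits(dump):
        return [(byte >> i) & 1 for byte in dump for i in range(8)]

    def analyze(dumps):
        """Compare several power-up dumps of the same chip and split bit
        positions into 'stable' (fingerprint) and 'noisy' (entropy)."""
        columns = list(zip(*(to_bits(d) for d in dumps)))
        stable = [i for i, col in enumerate(columns) if len(set(col)) == 1]
        noisy = [i for i, col in enumerate(columns) if len(set(col)) > 1]
        fingerprint = bytes(columns[i][0] for i in stable[:256])
        # Condense many weakly random bits into one high-quality seed.
        seed = hashlib.sha256(b"".join(bytes(d) for d in dumps)).digest()
        return fingerprint, noisy, seed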

Comments Filter:
  • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Monday September 10, 2007 @11:07AM (#20539003) Journal
    A slightly more expensive but somehow even more random method is to seed the generator against the words and phrases that come out of the mouth of South Carolina's Miss Teen USA [youtube.com].

    But in all seriousness, I wonder how this compares to the Mersenne Twister [wikipedia.org] (Java implementation [gmu.edu] & PDF [hiroshima-u.ac.jp]) that I use at home? I'm almost sure this new proposed method is more efficient and faster; when will there be (I know, I'm lazy) a universal implementation of it? :)

    Also, this may be a stupid question, but I wonder how one measures the 'randomness' of a generator? Is there a unit that represents randomness? It would seem impossible to do it just by observing the output, so I guess all you can do is discuss, theoretically, how dependent the output is on particular prior events. Can you really say this generator is 'more random' than another one just because you have to know so much more beforehand about the particular machine and its fingerprint in order to predict its output?
  • by Anonymous Coward on Monday September 10, 2007 @11:24AM (#20539267)
    The best test for randomness is the compressibility of the stream. If the stream is truly random, there will be no coherence and no redundant data, so it will be incompressible.
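
    As a rough illustration of that idea (compression as a crude sanity check, not a real test battery like Diehard or NIST SP 800-22), a few lines of Python:

        import os
        import zlib

        def compression_ratio(data: bytes) -> float:
            """Compressed size / original size; a ratio near 1.0 means the
            compressor found almost no exploitable structure."""
            return len(zlib.compress(data, 9)) / len(data)

        print(compression_ratio(os.urandom(1 << 20)))   # ~1.0, incompressible
        print(compression_ratio(b"abcd" * (1 << 18)))   # tiny, highly redundant

    Note that incompressibility is necessary but not sufficient: a stream can look incompressible to zlib and still be perfectly predictable to someone who knows the generator.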
  • by G4from128k ( 686170 ) on Monday September 10, 2007 @11:25AM (#20539281)
    I wouldn't assume that these fingerprints are as unique or pattern-less as one might hope (a point discussed in the PDF). All of the RAM chips from a given wafer or a given mask may share tendencies toward certain patterns in the probability of a 0 or a 1. These patterns may appear as correlations between rows and columns of a given chip. Location on the wafer (in the context of nonuniform exposure to fab chemicals) might also systematically affect the aggregate probabilities of 0 or 1, or the repeatability of the fingerprint. How consistent or random these fingerprints are might also change from run to run and from manufacturer to manufacturer. Finally, I'd bet that the probabilities vary systematically with temperature -- e.g., the probability of a 1 increases for all bits as the chip's temperature increases.

    This is a very interesting phenomenon, but a lot more data is needed to show that it provides consistent behavior.
  • by kevmatic ( 1133523 ) on Monday September 10, 2007 @11:27AM (#20539319)
    Yeah, it can be measured. There is no unit, though, as it's a measure of entropy, so things are only more or less random than other things; I imagine randomness-testing programs assign scores to it. A random number is just a number: a '1' might be randomly selected out of 1 through 6, but it's still just 1. A set of digits is considered random if, for every position, the chance of the next digit being, say, 4 is 1 in 10. So if you have a random sequence and come across a 1, the probability that the next digit is 1 is still 1 in 10, and the same is true for 2, 3, 4 and so on. By measuring those probabilities, you can measure how random your string of numbers is.

    But just because it's random doesn't mean it's unpredictable. Numbers that are random (as per my definition above) yet predictable are pseudorandom. An example is a book of random numbers (which UNIVAC used to publish). Each individual digit might be unpredictable, but if you take a group of, say, 8 digits, you can find that group in the book and read off the numbers before and after it. That makes it useless for cryptographic keys. A pseudorandom number generator (/dev/urandom) uses math formulas to make pseudorandom numbers; the math can be reproduced, and therefore what it spits out can be predicted.

    REAL random generators, such as this one, are considered 'practically' unpredictable. But I might still be able to influence the probabilities by, say, blasting the RAM with a can of freeze spray to bias its start-up state. That doesn't make it completely predictable, but it could narrow the possibilities for a brute-force attack. This isn't new, either; video game consoles use this kind of thing for randomness all the time.
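
    For what it's worth, the usual quantitative handle on this is Shannon entropy, measured in bits per symbol. A minimal Python sketch that estimates it from observed byte frequencies (a crude first-order estimate that ignores correlations between successive symbols):

        import math
        from collections import Counter

        def shannon_entropy(data: bytes) -> float:
            """First-order entropy estimate in bits per byte (max 8.0)."""
            counts = Counter(data)
            n = len(data)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        # A uniform random byte stream approaches 8 bits/byte; English text
        # typically lands around 4-5 bits/byte at this order.

    A high score here only means the symbol frequencies look flat; as the comment above points out, it says nothing about whether the next symbol is predictable from the earlier ones.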
  • by nweaver ( 113078 ) on Monday September 10, 2007 @11:29AM (#20539357) Homepage
    The true RNG properties rely on the fact that:

    a: Many of the bits are only sort of random, but they are physically random. So: very biased coins, but true randomness.

    b: With the right reduction function, you can turn a LOT (e.g., 512 Kb) of cruddy random data into a small amount (128b-512b) of very high quality, well-distributed random bits (a toy version is sketched at the end of this comment).

    And the fingerprinting relies on the fact that:

    a: Many of the other bits are physically random but VERY VERY biased. So map where those are and record them, and you have a very good fingerprint. And since it is all silicon process randomness going into that, it is pretty much a physically unclonable function.

    Kevin Fu has some SMART grad students.
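
    To make point (b) concrete, here is a toy reduction in Python: the classic von Neumann debiasing trick followed by a hash. This is just an illustration of the general idea, not necessarily the extractor the paper actually uses.

        import hashlib
        import random

        def von_neumann(bits):
            """Debias a stream of biased coin flips: take non-overlapping
            pairs, emit 0 for '01' and 1 for '10', discard '00' and '11'."""
            out = []
            for a, b in zip(bits[::2], bits[1::2]):
                if a != b:
                    out.append(a)
            return out

        # Simulate ~512 Kb of very biased "power-up" bits (90% ones).
        raw = [1 if random.random() < 0.9 else 0 for _ in range(512 * 1024)]
        unbiased = von_neumann(raw)

        # Condense the result into a fixed-size, well-distributed value.
        seed = hashlib.sha256(bytes(unbiased)).hexdigest()
        print(len(unbiased), "unbiased bits ->", seed)

    Von Neumann's trick assumes independent flips with a fixed bias, which real SRAM cells only approximate; hashing the whole dump is the more forgiving approach.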
  • Read Gleick's Chaos (Score:3, Interesting)

    by Weaselmancer ( 533834 ) on Monday September 10, 2007 @11:42AM (#20539585)

    Also, this may be a stupid question, but I wonder how one measures the 'randomness' of a generator?

    Read James Gleick's Chaos.

    There is a method in that book that describes how they extracted attractors from a stream of data. Here's how it works.

    A team of researchers had a bathtub with a dripping faucet. They tuned the dripping until the drips fell at random intervals. Nothing rhythmic about it. As the drop broke away from the faucet, it was setting up a vibration in the remaining water that would jiggle until the next drop fell. It was highly nonlinear.

    They constructed a phase space out of the timing: one axis is the time between a pair of drops, and the other axis is the time between the previous pair. So on one axis you have the time between drops 1 and 2, and on the other axis the time between drops 2 and 3.

    It turns out that an attractor would emerge. The times did not scatter around the page randomly, they grouped in clusters. There was an underlying order that this method would expose.

    So, to answer your question: take your stream of numbers and examine it in phase space, looking at the differences between successive data points. If nothing shows up in a two-dimensional plot, go to three; use n1-n2, n2-n3 and n3-n4 as your axes. Add more dimensions beyond that if you need to. See what it takes to make your data cluster, if it ever does. The more complex your data is, the more dimensions it will take to visualize that.
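
    A quick Python sketch of that delay-coordinate idea, just printing the successive-difference pairs you would feed to a scatter plot (the interval values below are made up purely for illustration):

        def delay_pairs(samples, dim=2):
            """Build delay-coordinate vectors from successive differences:
            for dim=2 this yields (n1-n2, n2-n3), (n2-n3, n3-n4), ..."""
            diffs = [a - b for a, b in zip(samples, samples[1:])]
            return [tuple(diffs[i:i + dim]) for i in range(len(diffs) - dim + 1)]

        # Inter-drop intervals: a truly random sequence should scatter over
        # the whole plane, while a chaotic one clusters onto an attractor.
        intervals = [0.31, 0.47, 0.29, 0.52, 0.33, 0.45, 0.30]
        for point in delay_pairs(intervals):
            print(point)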

  • by rpp3po ( 641313 ) on Monday September 10, 2007 @11:55AM (#20539883)
    The original paper is much better than CmdrTaco's quick conclusions.
    The described method is ONLY for SRAM (static RAM), not DRAM or SDRAM. You find SRAM on RFID chips and in a CPU's cache, not in main memory. As there is no way to access a CPU's cache uninitialized, I can't see why this should be useful.
    If you have to modify a CPU first to allow access to its uninitialized caches (think about all the unwanted implications), it's much cheaper to just give it a thermal diode and a register to poll (as most modern CPUs already have).
    After all, the described method is just another way of collecting thermal noise. And since RFIDs are custom designs most of the time, there too it would be cheaper to just use a thermal diode.
    The only application for this would be if you had to develop strong crypto for legacy RFID chips.
    Slashdot stories get worse by the day.
  • by ThosLives ( 686517 ) on Monday September 10, 2007 @12:13PM (#20540153) Journal

    There is no unit, though, as it's a measure of entropy.

    Eh, well, the unit of entropy is actually "energy per temperature"*, so there are physical units associated with it. Of course, that's physical entropy, and I don't know that it's the same as "information entropy." If they're different, then I blame the folks that overload technical words.

    That said, I always thought "random" simply meant "the next outcome is not predictable based on all previous measurements." Therefore the measure of "random" would be based on probability that the next outcome can be predicted based on the previous measurements. I'd say in this case that "completely nonrandom" would be "the next outcome can always be predicted based on previous measurements" and "completely random" would be "zero probability of predicting the next outcome based on previous measurements."

    In that sense, it's probably not possible for anything to be either completely random or completely nonrandom, because there is always a finite probability of guessing correctly, and it's probably impossible to distinguish a lucky guess from a genuine prediction based on previous measurements.




    *From dS = dQ/T, where S is entropy, Q is heat, and T is temperature (or, better yet, S = (Boltzmann's constant) x ln(multiplicity of the system)). I can't remember from Shannon's paper the exact method he used to compute "entropy", but I'm pretty sure it's not "change in energy per unit temperature". Come to think of it, my guess is Shannon's entropy is basically the log of the multiplicity with Boltzmann's constant dropped, so the units disappear (multiplicity doesn't have units). Those crazy non-physical scientists! *grin*
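
    For the record, the two notions do line up; the standard textbook formulas (not from the paper under discussion) are:

        Shannon:  H = -sum_i p_i log2(p_i)      (bits per symbol)
        Gibbs:    S = -k_B sum_i p_i ln(p_i)    (joules per kelvin)

    So S = k_B ln(2) H: information entropy is thermodynamic entropy with the Boltzmann constant (and a change of logarithm base) stripped off, which is why it comes out dimensionless. For W equally likely microstates (p_i = 1/W) the Gibbs formula reduces to Boltzmann's S = k_B ln W.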

  • by tlhIngan ( 30335 ) <slashdot.worf@net> on Monday September 10, 2007 @12:54PM (#20540781)

    As an embedded engineer, I've encountered numerous cases where power cycling RAM did not alter the contents.

    In fact, I've seen systems boot and run even after the power was cut for several seconds. Some types of SRAM and SDRAM have the ability to retain an (imperfect) memory image even at very low voltage levels. Sure, it's not guaranteed to be accurate by the manufacturer, but RAM "images" are a pretty well known phenomenon. In some cases, the contents of memory can be reconstructed even after the computer has been powered off and removed to a forensic laboratory.

    This is not random at all. In fact, it's more likely to produce an easily exploitable RNG than anything else; I would not be at all surprised if the standard UNIX random number generator provided better security.


    I've had this bite me, and exploited it.

    It bit me when booting into Windows CE: you'd power cycle the thing, and the OS would boot with the old RAM disk you had. We eventually had the bootloader wipe the kernel memory so the old data structures were gone by the time the OS was deciding between mounting the existing RAM disk (object store) and starting fresh. It turns out that the longer an image sits unchanged in RAM, the more likely the cells are to be biased such that, when you cycle the power, they lean towards the state they held before power was cut.

    The time I exploited it, I didn't have any other way of logging. Logging to the serial port caused issues (timing-sensitive code), so I logged to memory (no, I had no filesystem running, so I couldn't log to a file). My trick was simply to log to a circular RAM buffer. When the system crashed, I would just power cycle it and dump the RAM buffer. Even though the data was fresh (so it hadn't had much time to "burn in"), it was enough to make out what my debug messages were trying to say (almost always perfectly). The log was readable after a brief power cycle, and still readable after the power had been off for nearly a minute. Some characters got corrupted, but since it was plain ASCII, you could still make out the words.
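
    A minimal sketch of that kind of crash log, in Python for readability rather than the C you'd actually use on an embedded target; only the fixed-size circular buffer and the dump-after-reset idea come from the comment, the rest is invented for illustration.

        class RingLog:
            """Fixed-size circular text log: old entries get overwritten,
            so the buffer always holds the most recent output."""
            def __init__(self, size=4096):
                self.buf = bytearray(size)   # on real hardware: a reserved RAM region
                self.pos = 0

            def write(self, msg: str):
                for b in (msg + "\n").encode("ascii", "replace"):
                    self.buf[self.pos] = b
                    self.pos = (self.pos + 1) % len(self.buf)

            def dump(self) -> str:
                # After a reset you'd read the reserved region back out; some
                # bytes may be corrupted, hence the lenient decoding here.
                raw = bytes(self.buf[self.pos:] + self.buf[:self.pos])
                return raw.decode("ascii", "replace")

        log = RingLog(64)
        log.write("boot: clock init ok")
        log.write("dma: transfer stalled")
        print(log.dump())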
  • by sdedeo ( 683762 ) on Monday September 10, 2007 @03:02PM (#20542905) Homepage Journal
    Entropy is fascinating. It's proportional to the logarithm of the number of microstates, but until the advent of quantum mechanics there was no good way to count the microstates of a given physical system. Once you have the uncertainty principle, you can divide phase space up into little cells of volume (Planck's constant)^(number of dimensions) and count them that way.

    Another way to put it is that before quantum mechanics, every measurement of entropy was only meaningful in a relative, differential sense: S is arbitrary up to a constant. You can see that from the definition you used, which, when you integrate it, is ambiguous up to a constant of integration.
