Cold Reboot Attacks on Disk Encryption 398

jcrouthamel writes "Contrary to popular assumption, DRAMs used in most modern computers retain their contents for seconds to minutes after power is lost, even at operating temperatures and even if removed from a motherboard. Although DRAMs become less reliable when they are not refreshed, they are not immediately erased, and their contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access. We use cold reboots to mount attacks on popular disk encryption systems — BitLocker, FileVault, dm-crypt, and TrueCrypt — using no special devices or materials. We experimentally characterize the extent and predictability of memory remanence and report that remanence times can be increased dramatically with simple techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for partially mitigating these risks, we know of no simple remedy that would eliminate them."
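The key-finding idea the summary mentions can be illustrated with a rough sketch. The authors' actual tools match AES key-schedule structure and correct for bit decay; this hypothetical Python version only flags high-entropy windows in a memory image, a much cruder heuristic:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte over the given window."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def key_candidates(image: bytes, window: int = 32, threshold: float = 4.5):
    """Yield offsets whose windows look key-like (near-random bytes).

    Real key-recovery tools instead search for the structure of an
    expanded key schedule, which also tolerates decayed bits."""
    for off in range(0, len(image) - window, window):
        if shannon_entropy(image[off:off + window]) >= threshold:
            yield off
```

A zeroed (or mostly decayed) region scores near zero entropy, while a 256-bit key scores close to the 5-bit maximum for a 32-byte window, so candidates stand out sharply.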
This discussion has been archived. No new comments can be posted.

  • Clear the DRAM? (Score:5, Interesting)

    by the4thdimension ( 1151939 ) on Thursday February 21, 2008 @12:44PM (#22503834) Homepage
    The operating system could probably implement an algorithm that clears out DRAM, except for what is actually needed to power down or boot up. I am not sure of the exact logistics, but it seems silly to just power down and leave the DRAM however it was, whether clearing it is instant or takes a few minutes.
  • Re:Clear the DRAM? (Score:5, Interesting)

    by KublaiKhan ( 522918 ) on Thursday February 21, 2008 @12:48PM (#22503910) Homepage Journal
    It is possible; I remember reading about a class of programs in the Jargon File that exploited certain machine code instructions to clear out the whole of the machine's memory, including itself.

    Can't remember the name of 'em, though.
  • by wild_berry ( 448019 ) on Thursday February 21, 2008 @12:50PM (#22503940) Journal
    If you've just turned it off, I can spray your RAM with coolant and have it retain its memory for hours. Then I boot from a USB stick or DVD and use a small program to read the contents of your RAM and harvest your keys.

    The method strikes me as the best way to get past TPM devices, until they include measures to zero all RAM on shutdown.
  • Complete Bullshit (Score:1, Interesting)

    by Anonymous Coward on Thursday February 21, 2008 @12:56PM (#22504028)
    This is just an excuse for some company to sell some kind of snake-oil "RAMWiper" product and try to scare rubes and idiot nontechnical CTOs into believing that they need it.

    We've said it time and time again. If the bad guys can get physical access to a machine (which it seems they would need here), all bets are off, and there's nothing you can do about it. Period. The trick is preventing physical access.
  • by Anonymous Coward on Thursday February 21, 2008 @01:04PM (#22504150)
    Heck, with physical access to a running machine, jack into the firewire or USB port and you have clear access to reading and writing all the memory you want.
    I wrote a small paper here http://www.friendsglobal.com/papers/FireWire%20Memory%20Dump%20of%20Windows%20XP.pdf [friendsglobal.com] for a forensics class on using firewire to access memory, subverting the operating system.

    All bets are off once physical access is gained. The best bet would be to store the keys, somehow, in the CPU's caches and never let them stay in main memory.
  • Re:Physical Access (Score:3, Interesting)

    by Qzukk ( 229616 ) on Thursday February 21, 2008 @01:04PM (#22504156) Journal
    2) It is on
    3) You've entered your crypto keys


    These two are likely true for every kind of whole-drive encryption, unless the encrypted drive is unmounted every time you walk away. As for 1), it's true if you lock the console and walk away from the computer, which quite a lot of people do.

    The best workaround would be for A) operating systems to support fixing keys in a single spot in memory and B) drive encryption systems to automatically unmount and scramble the key (in the same place it's always been, rather than wherever the OS felt it should be copied to on write) after X minutes of inactivity (prompting the user for the key again when they want to use it again).
  • Very real concern (Score:5, Interesting)

    by Deagol ( 323173 ) on Thursday February 21, 2008 @01:07PM (#22504198) Homepage
    I run my FreeBSD (7.0RC3) system with some geli-encrypted volumes, and one-time encrypted swap and /tmp. Very little data can leak out to non-encrypted space (yeah, /var/tmp is one).

    However, for grins one day, I decided to run "dd if=/dev/mem bs=1m count=[mem size] | strings | grep [whatever]" and found not only various passwords, but URLs for sites visited *weeks* ago, even after reboots. So, I installed the "secure_delete" port and ran "smem". No luck -- some stuff got wiped, but some remained in memory. So I booted to a memtest86 CD-ROM, and ran the full test (this test does all kinds of writes/reads to memory). Then, I booted *back* to the normal system, and I was *still* able to recover juicy bits from /dev/mem. WTF?
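For the curious, here is a rough Python equivalent of that dd/strings/grep pipeline, run against a saved memory dump rather than /dev/mem directly (the function names are illustrative):

```python
import re

def extract_strings(dump: bytes, min_len: int = 6):
    """Pull out runs of printable ASCII, like strings(1) does."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, dump)]

def grep(dump: bytes, needle: str):
    """Keep only the extracted strings containing the search term."""
    return [s for s in extract_strings(dump) if needle in s]
```

Running something like this over a dump of your own machine's RAM is a sobering exercise; passwords and URLs tend to show up exactly as described above.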

    We need a kernel module for the common OSes that can encrypt virtual pages (is that the right term?) so that whether in core or paged, they won't be vulnerable.

  • Countermeasure here: (Score:5, Interesting)

    by jetmarc ( 592741 ) on Thursday February 21, 2008 @01:07PM (#22504210)
    This attack is very powerful.

    It's not possible to "clear the DRAM" (as others have suggested), because the attacker will boot his own CD and not give control to your OS after the reset. Thus you won't be able to clear anything.

    Anything? Not so quick, my dear! For the CD to boot, first there is the BIOS. And the BIOS needs memory as well (for the menus, the screen, the El Torito floppy image, etc.).

    Now the countermeasure is obvious: keep the sensitive key material in memory areas that are erased during the early boot procedure. Then the attack complexity is raised from "no hardware required" to "specialist hardware necessary, no guarantees given".

    It might seem difficult to find out which memory is of that category. But it isn't, either! Just prepare two boot CDs. One that fills all memory with a known pattern (eg 0x55). Boot it. Then reset and insert the second CD, which identifies all memory areas that have lost the known pattern. These areas have either suffered DRAM fade, or they have been overwritten during the BIOS boot process. Use heuristics to find out which of the two was the cause. Done!
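The second CD's comparison step might be sketched like this (hypothetical Python, with the chunk size and the decay-vs-overwrite heuristic left as assumptions):

```python
FILL = 0x55  # known pattern written to all memory by the first boot CD

def changed_regions(after: bytes, chunk: int = 4096):
    """Offsets of chunks that no longer hold the fill pattern after reboot.

    These chunks were either overwritten by the BIOS/boot path or have
    suffered DRAM fade; a follow-up heuristic (e.g. the fraction of
    flipped bits per chunk) would separate the two cases."""
    dirty = []
    for off in range(0, len(after), chunk):
        if any(b != FILL for b in after[off:off + chunk]):
            dirty.append(off)
    return dirty
```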

    As simple as that.

    Regards,
    Marc
  • Already Screwed (Score:4, Interesting)

    by immcintosh ( 1089551 ) <slashdot@ianmcin ... .org minus punct> on Thursday February 21, 2008 @01:11PM (#22504256) Homepage
    If somebody has the kind of access to cut power to your system and then immediately reboot with a malicious thumb drive, they probably have enough access to install something like an inconspicuous hardware keylogger, which I would be much MUCH more worried about than this if you're doing something sensitive enough to warrant it.

    And aside from that, couldn't you just encrypt the important parts of your memory and swap as well as your hard drive? Seems like that would defeat this quite handily, and again, if I were doing something so sensitive I'd probably be taking such precautions.

    Honestly though, aside from people doing stuff like maybe international or corporate espionage, I can't imagine where any of this would be a problem.
  • by seth_hartbecke ( 27500 ) on Thursday February 21, 2008 @01:11PM (#22504258) Homepage
    Full memory encryption. Set a chip on the memory bus; it encrypts/decrypts all the data as it passes between the CPU and RAM chips. At first this would be something like the old MMUs before they were built into the CPU itself: they sat on the address bus and added/subtracted offsets. This would sit on the data bus and do some simple crypto. Put a capacitor right next to it; the first time the chip powers up it selects a random key, and when the motherboard loses power the capacitor keeps the chip running long enough for it to overwrite the key it was internally storing.

    Then if they manage to break into your super secure datacenter, wheel in their tank of liquid nitrogen, and pump your server full of it just so they can steal your RAM chips... it still doesn't get them anything.

    (If you read the paper, they talk about how if you cool the chips with liquid nitrogen they keep their contents with power off and removed for 'several hours'...they argue that simply modifying the bios to zero at startup isn't sufficient as they may physically *remove* the ram chips before you have a chance to zero them)
  • Re:Physical Access (Score:1, Interesting)

    by Anonymous Coward on Thursday February 21, 2008 @01:12PM (#22504270)

    Think criminal investigation. I don't believe I have anything illegal on any of my computers, but given the quantity of laws, I don't really know. I encrypt all my drives just to be sure, and to try to protect the other people who use my hardware remotely.

    If the police came busting in with a search warrant I could easily see a situation in which the RAM modules could be popped very shortly after the power was turned off. Now I doubt that this technique will filter down to the average police forensics people for a couple of years, but it is a worry.

  • Re:CPU cache? (Score:3, Interesting)

    by mzs ( 595629 ) on Thursday February 21, 2008 @01:17PM (#22504356)
    VIA had, a while back, some registers that you could tweak to lock L1 cache lines. I don't know if modern VIA chips still have this. In the x86 world I do not know of anything similar from Intel or AMD. You might have better luck using unused registers in the CPU (debugging registers) or external chips (chipset UART registers, say), but then you would not be able to use those for anything else. There may be enough legacy registers around that are no longer really used to make it work, with the penalty of it being very slow.
  • Re:Clear the DRAM? (Score:3, Interesting)

    by fastest fascist ( 1086001 ) on Thursday February 21, 2008 @01:18PM (#22504368)
    Excuse me? If you had access to the running system with its encrypted bits unlocked, why on earth would you turn it off? As I see it, this might be a problem if you get stormed by an adversary and pull the plug in an effort to secure your data.
  • Re:Clear the DRAM? (Score:3, Interesting)

    by sempernoctis ( 1229258 ) on Thursday February 21, 2008 @01:19PM (#22504384)
    ATX power supplies are required to provide power for a certain amount of time after signaling the motherboard that they are shutting down. I forget how long that is, but it is probably long enough to wipe a few encryption keys if you know what part of memory to wipe. The attacker would then have to physically separate the power supply from the motherboard or remove the RAM while the system is running. To do that they would have to have the case open, so you could have sensors that detect tampering with the case and cause all encrypted data to be dismounted.
  • Re:Clear the DRAM? (Score:5, Interesting)

    by Anonymous Coward on Thursday February 21, 2008 @01:22PM (#22504410)
    Thermite grenades, actually, taped to the case. Classified computers in sensitive areas get turned into slag if the position is overrun. (From my days working for one of the giant US 'aerospace' companies.)
  • by Animats ( 122034 ) on Thursday February 21, 2008 @01:26PM (#22504484) Homepage

    Easy fix: install a BIOS/boot ROM with a non-bypassable memory test of all memory. This will clear all memory at power-up before reading the boot device.

  • Put the key in SRAM. (Score:3, Interesting)

    by Skapare ( 16644 ) on Thursday February 21, 2008 @01:38PM (#22504690) Homepage

    Put the key in a small piece of SRAM. When it gets used to encrypt something, be sure the place of its usage gets wiped back off real fast to minimize the chance of it being exposed to the cold attack. Split data up with different keys for different data, so if a key is exposed, only a minimal amount of data is lost. Another option is to double or triple encrypt the data being sure never to have more than one key at a time in DRAM.

    So where do we get this SRAM? A CPU register that is not used, and not saved, for any current purpose is one possible place. For a large amount of SRAM, check out your video card buffer.

  • Re:Clear the DRAM? (Score:2, Interesting)

    by KublaiKhan ( 522918 ) on Thursday February 21, 2008 @01:50PM (#22504860) Homepage Journal
    Yes, that's it. Thanks.

    Kinda funny that that entry says "Death code isn't very useful, but writing it is an interesting hacking challenge on architectures where the instruction set makes it possible"...I guess we have a practical use for it now, huh?
  • by tamyrlin ( 51 ) on Thursday February 21, 2008 @01:51PM (#22504874) Homepage
    The problem is much worse on laptops actually. Most laptops have some sort of CardBus slot nowadays. This is basically a PCI interface. The idea is to create a CardBus interface which allows you to dump the physical memory of the laptop using DMA. No need to freeze any memory, just insert your custom made card into the computer and wait for it to copy the memory contents to flash memory or something similar. There are a few question marks here though:
    • Is the Cardbus slot always enabled or must the OS enable it? (If the OS has to enable the cardbus slot you can be safe if the OS doesn't probe the slots when the screen is locked.)
    • If the OS enables the Cardbus slot can it stop the device from doing bus mastering before the device has been identified and a driver has been loaded? (If so you only have to mimic a card which the OS has drivers for to work around it though.)
    Creating a CardBus card isn't exactly rocket science. You could probably do it in a weekend with off-the-shelf components for less than $200, unless Murphy is feeling creative... :)
  • by this great guy ( 922511 ) on Thursday February 21, 2008 @01:54PM (#22504940)

    This is yet another attack that the developer of loop-AES [sourceforge.net] thought about, while virtually every other disk encryption tool out there is vulnerable. Loop-AES is the 3rd most popular disk encryption tool on Linux. See the KEYSCRUB=y option in its README file [sourceforge.net]:

    If you want to enable AES encryption key scrubbing, specify KEYSCRUB=y on make command line. Loop encryption key scrubbing moves and inverts key bits in kernel RAM so that the thin oxide which forms the storage capacitor dielectric of DRAM cells is not permitted to develop detectable property. For more info, see Peter Gutmann's paper [cypherpunks.to].
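A toy model of the key-scrubbing idea described in that README (the real loop-AES implementation is kernel C and also physically moves the key bits; this hypothetical Python version only re-randomizes the stored representation so that no memory cell holds a fixed key bit for long):

```python
import secrets

class ScrubbedKey:
    """Store a key XORed with a mask and re-randomize the mask
    periodically, so every stored bit keeps flipping even though the
    logical key never changes. This is what prevents a fixed charge
    pattern from 'burning in' to the DRAM cells over time."""

    def __init__(self, key: bytes):
        self._mask = secrets.token_bytes(len(key))
        self._masked = bytes(k ^ m for k, m in zip(key, self._mask))

    def scrub(self):
        """Re-mask in place: flips the stored bits, key stays the same."""
        new_mask = secrets.token_bytes(len(self._masked))
        self._masked = bytes(b ^ m ^ n for b, m, n in
                             zip(self._masked, self._mask, new_mask))
        self._mask = new_mask

    def reveal(self) -> bytes:
        """Reconstruct the logical key when it is actually needed."""
        return bytes(b ^ m for b, m in zip(self._masked, self._mask))
```

In a real driver, scrub() would run on a timer from kernel context; a userspace Python object obviously gives none of those guarantees.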

    I have used loop-AES as a full disk encryption tool on my laptop for 2+ years. I am glad I took the time to carefully research which tool would be the most secure before deploying it! For example, even TrueCrypt and dm-crypt are vulnerable to other (arguably minor) security issues that loop-AES is impervious to: http://article.gmane.org/gmane.linux.cryptography/2321 [gmane.org]

    Surprisingly, the research paper TFA talks about doesn't even directly mention loop-AES (its name only happens to be in the title of a webpage in the reference section describing a safe suspend/resume setup when using disk encryption).

  • by orclevegam ( 940336 ) on Thursday February 21, 2008 @02:21PM (#22505416) Journal
    Hmm... it's kind of assumed that there will at least be a screensaver password enabled that would prevent you from accessing the data directly. On the topic of preventing the new OS from reading the data, though: why not store the decryption keys at 0000:7C00? Anyone familiar with how boot loaders work knows that that's the address the boot loader gets copied to by the BIOS, so if you store your sensitive data there, simply booting a new OS would wipe it out.
  • Re:Clear the DRAM? (Score:5, Interesting)

    by orclevegam ( 940336 ) on Thursday February 21, 2008 @02:27PM (#22505486) Journal
    Yes, but the point is, if the system is powered down when it's stolen, or the hard drive is removed from the system, the full disk encryption will still protect the data. This is only a valid attack vector in the highly unlikely occasion that you have access to the powered on system, and even then it's somewhat dubious as to whether you'll get the data you need off of it. As I said though, this is very interesting from an academic standpoint for the simple fact that it is something that hasn't really been thought of. That being the case though, I can already think of one way to prevent this. Simply store your decryption keys at memory offset 0000:7C00, which will ensure that the BIOS copies the boot loader over your keys the next time the system boots (on x86 systems at any rate).
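The address arithmetic behind the 0000:7C00 suggestion is simple to check (hypothetical helper names; note that this only helps if the attacker actually lets the BIOS load a boot sector there, and it does nothing against physically removing the RAM before rebooting):

```python
def real_mode_linear(segment: int, offset: int) -> int:
    """8086 real-mode addressing: linear address = segment * 16 + offset."""
    return (segment << 4) + offset

BOOT_LOAD = real_mode_linear(0x0000, 0x7C00)  # where the BIOS puts the boot sector
BOOT_END = BOOT_LOAD + 512                    # a boot sector is 512 bytes

def overwritten_on_boot(addr: int, length: int) -> bool:
    """Would a key at [addr, addr+length) be clobbered by the boot sector load?"""
    return addr >= BOOT_LOAD and addr + length <= BOOT_END
```

So only 512 bytes get overwritten for free, which is plenty for a disk key but not for arbitrary sensitive data.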
  • Sony IBM Cell (Score:5, Interesting)

    by epine ( 68316 ) on Thursday February 21, 2008 @02:33PM (#22505574)
    I'm surprised no one has mentioned the Cell processor yet. I guess everyone hates it.

    The first power word that a toddler learns is "mine!" It's the capstone to a complete working vocabulary: mommy, daddy, more, enough, and mine. My laptop, my hardware, my data, my privacy. The word "mine" has a direct bypass to the neurological circuit "you can't make me", which as adults lingers as a deeply-rooted fascination with rubber-hose cryptography, and bravado propositions such as "if the Feds bust through your windows". Wrong answer.

    Let's look at this from Sony's perspective: my media, my hardware, my design, my copyright, my profits. But guess what? They have a small physical access problem. Millions of zit faced kids with access to liquid nitrogen can get their paws inside the PS3.

    This is why an entire SPU is locked down on the PS3 for security / DRM purposes. The SPU contains 256K of SRAM which is carefully guarded. The instruction set is synchronous and deterministic to guard against timing attacks. They were aware of power attacks as well. These can be partially mitigated in software for critical routines by executing non-conditional instruction sequences and then discarding the portions of the computation you didn't want. By design, the SPU doesn't dance on the power line the way most modern speculative out-of-order processors do to begin with. You can't use latency effects, because the local SRAM has constant access time. You can't use contention effects because there aren't any below the level of DMA bursts, which are controlled by a companion processor within the SPU. Plus I think it is possible to schedule SPU-SPU and SPU-memory DMA transfers deterministically, if you really need to. None of this was accidental.

    The hardest part of the problem is bootstrapping the secure SPU with the security kernel. I've forgotten how they went about it. There must be some kind of decrypt key buried in the Cell hardware which functions during initial code upload during processor initialization.

    In the long run it might be an unwinnable battle, but the PS3 certainly has a far better facility to maintain data security in the complete absence of hardware security than your average PC.

    Why can't the average hacker Harry, who wants to enjoy the same security as Sony/IBM, achieve this? You've already got the PS3 in your living room. Impediment: the secure system init decrypt key is probably burned into the silicon. It's probably a one-way key, so even if you crack the key, you won't be able to encrypt a replacement block of your own code that matches the decrypt key. But let's suppose you break that too. Problem: Sony knows the decrypt key for the SPU initialization sequence. Game over.

    Let's suppose you figure out how to physically change the silicon with an initialization decrypt code known only to yourself. Congratulations, you now enjoy the same protection for your secrets that Sony enjoys for "Untraceable". In doing so, you have now upgraded yourself to a sufficiently threatening fish to swim in a tank in Syria, where your nervous system will be similarly reconfigured.

    Ew, I feel like I've just written the script for "Adaptation".
  • by CarpetShark ( 865376 ) on Thursday February 21, 2008 @02:37PM (#22505624)
    Memory is reliably addressed -- writing to the address you wrote to earlier will change the same physical part of the ram. There are already existing tools that erase passphrases after a certain period without use. All you need to do is make those tools also scrub the addresses used to store it. A simple patch would cover that.

    What's more of a problem is: how do you make this timeout-plus-prompt-for-passphrase scheme work with disk-level encryption regardless of whether you're on a console or in a GUI, on an otherwise decent OS like Unix? I wouldn't trust Windows to implement disk-level encryption safely anyway, so all bets are off there. But Unix still has serious issues with presenting a simple dialog box to the user, reliably and securely, no matter what part of the system they're looking at.
  • Re:Clear the DRAM? (Score:5, Interesting)

    by Ollabelle ( 980205 ) on Thursday February 21, 2008 @02:43PM (#22505728)
    Had something similar when I was working for a heavy construction company. Used to take out the old hard drives and take them down to the welding shop. They always enjoyed torching a hole through the spindle and liquefying everything inside the case...
  • Re:Clear the DRAM? (Score:5, Interesting)

    by HTH NE1 ( 675604 ) on Thursday February 21, 2008 @02:53PM (#22505856)
    Just put in firmware a key generator and encrypt everything going on the memory bus with a new key on every start-up. Bonus: also remap the memory address space randomly so that instructions and data are not stored sequentially on the same chip. For pipeline filling to work though you'll want it integrated into the CPU.
  • It rather depends... (Score:3, Interesting)

    by jd ( 1658 ) <imipak@ y a hoo.com> on Thursday February 21, 2008 @02:59PM (#22505944) Homepage Journal
    ...on how you define "absolute". A one-time pad cannot be broken, even if you had a theoretically infinitely fast computer. (The ciphertext would decrypt to every possible string of the same length with equal probability, with no means of telling which one was the one you wanted.) If you want to add extra security, then introduce a negotiated random offset into the pad and a negotiated random step size.
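That "decrypts to every possible string" property is easy to demonstrate (a minimal sketch; `pad_for` is an illustrative helper, not part of any real system):

```python
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    """XOR one-time pad; the same function both encrypts and decrypts."""
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

def pad_for(ciphertext: bytes, candidate: bytes) -> bytes:
    """Perfect secrecy in miniature: for ANY candidate plaintext of the
    same length, there exists a pad that 'decrypts' the ciphertext to it,
    so the ciphertext alone reveals nothing about which one is real."""
    return bytes(c ^ m for c, m in zip(ciphertext, candidate))
```

Given a ciphertext, an attacker with infinite compute can produce a pad justifying any message they like, which is exactly why brute force is useless here.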

    Is that absolute security? Well, an outsider couldn't break the encryption. To do so requires the pad. There are no shortcuts. That's pretty absolute. And even with the pad, you need the offset and step size. I've been told of systems where these were mechanically random: the one-time pad tapes synchronized themselves remotely at a speed that was effectively non-deterministic, consuming a non-deterministic amount of tape to do so, giving you the random offset and step size.

    Ok, well is that perfect? Since not even the people with the machines could tell you in advance at what speed or at what point the synchronization would lock, that's pretty damn secure and could not possibly be known in advance. Thus, an attacker cannot have prior knowledge of the key being used. Nobody can, even if they have all of the physical components.

    Yes yes yes, it's secure, but is it perfect? It all depends on what you mean by perfect:

    1. It is "perfect" in the specific sense that a total outsider (ie: no prior knowledge) CANNOT decrypt the message, even in infinite time.
    2. It is "perfect" in the specific sense that a third party with the tape but not the synchronization mechanism CANNOT decrypt the message, even in infinite time.

    It is not "perfect" in the general sense, because if person A can decrypt the message with some information in their posession (A'), then person B can also decrypt the message with that same information (A').

    If you have N people in a group, how many of that N must be honest in order to guarantee that information from A to C reaches C even if at least one traitor B exists in that group, where C is not a traitor? The answer is (N/2)+1.

    Now, if (N/2)+1 out of N must be honest, can we improve on our encryption system? In order to get the pad to C, we could make a copy and break it down into random-sized fragments (that may or may not have holes), such that you need (N/2)+1 fragments out of the original N in order to rebuild the tape in the correct sequence. Even if everyone else was a traitor and cooperating with each other, they could not reconstruct the key.

    Now, to prevent said traitors tampering with the key, each fragment needs to be securely digitally signed. This means that A must have a key that everyone else can test against (and therefore decrypt) but nobody else can derive. Both the intended target and all traitors will thus know which fragments are real, but a traitor will not be able to make a fictional fragment look real.

    Is this perfect now? It depends so much on what you mean by perfect. It's now impractical to attack in transit, provided the algorithms themselves do not have weaknesses which push below the assumed security threshold. Provided both source and destination are themselves genuine, AND provided both source and destination have internal compartmentalization such that if/when attacked, the attacker will not be exposed to the encrypted message, the decrypted message or the decryption key, you have an admirable level of security.

    But is it perfect? Define perfect. I would consider any system that is so secure that it would never form a part of the attack vector for the foreseeable future to be, in any practical sense, perfect. No matter how much you added to it, it would make no difference to what anyone did or how anyone acted. It's perfect in that attackers would spend their resources attacking anything and everything else. Even if it were "perfect" in some abstract, theoretical sense, it would not add ANY additional security to the system. To me, that is perfect in the meaningful sense.

  • by HTH NE1 ( 675604 ) on Thursday February 21, 2008 @03:02PM (#22506006)
    I used that method to defeat the XOR encryption on an Apple II program. One sample of the decrypted data and the ability to read the encrypted data and it was simple to derive the XOR cypher for decrypting the whole disk without disassembling the code in the patched DOS.
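The known-plaintext trick against a repeating XOR cipher can be sketched as follows (assuming a fixed-length repeating key aligned to the start of the data, which the comment doesn't specify):

```python
def xor_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Repeating-key XOR; the same operation encrypts and decrypts."""
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

def recover_key(ciphertext: bytes, known_plain: bytes, key_len: int) -> bytes:
    """One decrypted sample is enough: XOR ciphertext against the known
    plaintext and the repeating key falls straight out."""
    assert len(known_plain) >= key_len
    return bytes(c ^ p for c, p in zip(ciphertext[:key_len], known_plain))
```

With the key in hand, the whole disk decrypts without ever disassembling the patched DOS, just as described.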
  • Re:Very real concern (Score:3, Interesting)

    by Deagol ( 323173 ) on Thursday February 21, 2008 @03:50PM (#22506708) Homepage
    Indeed -- however, I am not *that* dense. ;)

    It's pretty easy to tell, though: if you run my procedure and grep for "example", then see "http://www.example.com" in the result even though it hasn't been visited in days or weeks (even after reboots and power-downs), then it clearly appears to be lingering in RAM.

    Someone mentioned browser cache, but that's set to be cleared each time Firefox is loaded. Perhaps file system slack space is the culprit here.

    Someone else mentioned the VFS cache, which I don't know enough about to comment. Would such meta data be preserved on disk and end up in memory after a restart of the OS? Certainly, I know that without wiping inodes, that file names can persist on disk. For the curious, this can be seen by deleting a uniquely-named file from a directory, then doing a "dd if=[dir] | strings" which shows strings of current and deleted filenames. (Oddly enough, this worked for UFS, but not for my ZFS volume -- must handle things much differently.)

    I do agree that your dump-to-file-then-grep method is much more sane than mine and less prone to false alarms.

    In any case, this is a most interesting topic.

  • Re:Physical Access (Score:2, Interesting)

    by fuzznutz ( 789413 ) on Thursday February 21, 2008 @04:08PM (#22506972)

    Don't expect to have time to do anything if the feds bust down your door and want your boxes. Plus, now your machine doesn't even have to be turned off for someone to remove it to a forensic lab: introducing HotPlug.


    How about if I hit the reset button instead of the power button? The POST clears the DRAM during the pre-boot sequence.
  • by Anonymous Coward on Thursday February 21, 2008 @04:23PM (#22507210)
    1. USB doesn't provide a DMA mechanism. This is something only FireWire is capable of (and is part of the reason why FireWire is generally faster than USB in practice.)
    2. This is a well-known design issue with FireWire, and only certain types of controllers are vulnerable.
    Kudos for mentioning another interesting attack vector, but this is hardly an unavoidable one; just secure the FireWire controller, which is done in some designs.

    This "cold RAM" idea is also interesting, though; in theory, you could design a floppy you could insert before cold rebooting that would find encryption keys for you, without even having to disassemble the computer. (I doubt it would work very well in practice, though.)
  • by Adam J. Richter ( 17693 ) on Thursday February 21, 2008 @05:09PM (#22507764)
    I believe that the C3 processor made by VIA, and probably other processors in that family, allow some of the cache to be configured as SRAM and mapped into physical memory. So you could store the key in SRAM, which I believe really will lose its data upon power loss, although you may also want to take countermeasures such as those used in loop-AES to avoid detection by physical changes to the chip if the key is stored for a long time in the same place.

    Newer VIA processors also have some hardware AES support available under Linux, which they call "Padlock". So, if they still retain the SRAM feature, that would make a pretty good choice for the little fanless mini-ITX Linux box that receives your email.
  • by yeremein ( 678037 ) on Thursday February 21, 2008 @07:41PM (#22509464)
    For hard disk encryption, you need to keep the key in memory. Can't ask the user for a password every time you read a sector.
  • by Stanza ( 35421 ) on Friday February 22, 2008 @12:06AM (#22511398) Homepage Journal

    I suspect he is.

    I remember on my Apple II, when you first turned it on, the screen was filled with inverse '@' symbols. Then the screen would quickly blank (or be replaced by spaces) and the startup sequence would go.

    But if it hadn't been off for very long, not all the memory would go to zero (chr$('\0') is the inverse '@'). So, if you turned your computer back on within a certain time, you could either see the screen it had when it shut off, or, more likely, some rows of the screen would be inverse '@'s and some rows would be the previous screen. Sometimes with garbage characters interspersed, depending on how long the computer was off.

    I never did investigate how long the non-screen portions of the memory lasted, but the screen-portions (starting at $300, I think) would be gone in ten seconds.
