Cold Reboot Attacks on Disk Encryption 398
jcrouthamel writes "Contrary to popular assumption, DRAMs used in most modern computers retain their contents for seconds to minutes after power is lost, even at operating temperatures and even if removed from a motherboard. Although DRAMs become less reliable when they are not refreshed, they are not immediately erased, and their contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access. We use cold reboots to mount attacks on popular disk encryption systems — BitLocker, FileVault, dm-crypt, and TrueCrypt — using no special devices or materials. We experimentally characterize the extent and predictability of memory remanence and report that remanence times can be increased dramatically with simple techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for partially mitigating these risks, we know of no simple remedy that would eliminate them."
Clear the DRAM? (Score:5, Interesting)
Re:Clear the DRAM? (Score:5, Interesting)
Can't remember the name of 'em, though.
Re:only useful if you start off unencrypted (Score:2, Interesting)
The method strikes me as the best way to get past TPM devices, until they include measures to zero all RAM on shutdown.
Complete Bullshit (Score:1, Interesting)
We've said it time and time again. If the bad guys can get physical access to a machine (which it seems they would need here), all bets are off, and there's nothing you can do about it. Period. The trick is preventing physical access.
Firewire and USB can access memory (Score:5, Interesting)
I wrote a small paper here http://www.friendsglobal.com/papers/FireWire%20Memory%20Dump%20of%20Windows%20XP.pdf [friendsglobal.com] for a forensics class on using firewire to access memory, subverting the operating system.
All bets are off once physical access is gained. Best bet would be to store the keys, somehow, in the CPU's caches and never let them stay in main memory.
Re:Physical Access (Score:3, Interesting)
3) You've entered your crypto keys
These two are likely true for every kind of whole-drive encryption, unless the encrypted drive is unmounted every time you walk away. As for 1), it's true if you lock the console and walk away from the computer, which quite a lot of people do.
Best workaround would be for A) operating systems to support pinning keys to a single spot in memory, and B) drive encryption systems to automatically unmount and scramble the key (in the same place it's always been, rather than wherever the OS felt it should be copied on write) after X minutes of inactivity, prompting the user for the key again when they want to use it.
Very real concern (Score:5, Interesting)
However, for grins one day, I decided to run "dd if=/dev/mem bs=1m count=[mem size] | strings | grep [whatever]" and found not only various passwords, but URLs for sites visited *weeks* ago, even after reboots. So, I installed the "secure_delete" port and ran "smem". No luck -- some stuff got wiped, but some remained in memory. So I booted to a memtest86 CD-ROM, and ran the full test (this test does all kinds of writes/reads to memory). Then, I booted *back* to the normal system, and I was *still* able to recover juicy bits from /dev/mem. WTF?
We need a kernel module for the common OSes that can encrypt virtual pages (is that the right term?) so that whether in core or paged, they won't be vulnerable.
Countermeasure here: (Score:5, Interesting)
It's not possible to "clear the DRAM" (as others have suggested), because the attacker will boot his own CD and not give control to your OS after the reset. Thus you won't be able to clear anything.
Anything? Not so quick, my dear! For the CD to boot, first there is the BIOS. And the BIOS needs memory as well (for the menus, the screen, the El Torito floppy image, etc.).
Now the countermeasure is obvious: keep the sensitive key material in memory areas that are erased during the early boot procedure. Then the attack complexity is raised from "no hardware required" to "specialist hardware necessary, no guarantees given".
It might seem difficult to find out which memory falls into that category. But it isn't! Just prepare two boot CDs: one that fills all memory with a known pattern (e.g., 0x55), and a second that identifies all memory areas that have lost the known pattern. Boot the first, reset, then boot the second. The flagged areas have either suffered DRAM fade or been overwritten during the BIOS boot process; use heuristics to tell which of the two was the cause. Done!
As simple as that.
Regards,
Marc
Already Screwed (Score:4, Interesting)
And aside from that, couldn't you just encrypt the important parts of your memory and swap as well as your hard drive? Seems like that would defeat this quite handily, and again, if I were doing something so sensitive I'd probably be taking such precautions.
Honestly though, aside from people doing stuff like maybe international or corporate espionage, I can't imagine where any of this would be a problem.
New feature for motherboard vendors (Score:2, Interesting)
Then if they manage to break into your super secure datacenter, wheel in their tank of liquid nitrogen, and pump your server full of it just so they can steal your RAM chips... it still doesn't get them anything.
(If you read the paper, they talk about how if you cool the chips with liquid nitrogen they keep their contents with power off and removed for 'several hours'...they argue that simply modifying the bios to zero at startup isn't sufficient as they may physically *remove* the ram chips before you have a chance to zero them)
Re:Physical Access (Score:1, Interesting)
Think criminal investigation. I don't believe I have anything illegal on any of my computers, but given the quantity of laws, I don't really know. I encrypt all my drives just to be sure, and to try to protect the other people who use my hardware remotely.
If the police came busting in with a search warrant I could easily see a situation in which the RAM modules could be popped very shortly after the power was turned off. Now I doubt that this technique will filter down to the average police forensics people for a couple of years, but it is a worry.
Re:CPU cache? (Score:3, Interesting)
Re:Clear the DRAM? (Score:3, Interesting)
Re:Clear the DRAM? (Score:3, Interesting)
Re:Clear the DRAM? (Score:5, Interesting)
Fix: install BIOS with a memory test. (Score:4, Interesting)
Easy fix: install a BIOS/boot ROM with a non-bypassable memory test of all memory. This will clear all memory at power-up before reading the boot device.
Put the key in SRAM. (Score:3, Interesting)
Put the key in a small piece of SRAM. When it gets used to encrypt something, make sure the locations where it was used get wiped again quickly, to minimize the chance of it being exposed to a cold attack. Split data up with different keys for different data, so if a key is exposed, only a minimal amount of data is lost. Another option is to double or triple encrypt the data, being sure never to have more than one key at a time in DRAM.
So where do we get this SRAM? A CPU register that is not used, and not saved, for any current purpose is one possible place. For a large amount of SRAM, check out your video card buffer.
Re:Clear the DRAM? (Score:2, Interesting)
Kinda funny that that entry says "Death code isn't very useful, but writing it is an interesting hacking challenge on architectures where the instruction set makes it possible"...I guess we have a practical use for it now, huh?
You can probably read the memory via Cardbus! (Score:3, Interesting)
Another attack loop-AES thought about! (Score:5, Interesting)
This is yet another attack that the developer of loop-AES [sourceforge.net] thought about, while practically every other disk encryption tool out there is vulnerable. Loop-AES is the 3rd most popular disk encryption tool on Linux. See the KEYSCRUB=y option in its README file [sourceforge.net]:
I have used loop-AES as a full disk encryption tool on my laptop for 2+ years. I am glad I took the time to carefully research which tool would be the most secure before deploying it! For example, even TrueCrypt and dm-crypt are vulnerable to other (arguably minor) security issues that loop-AES is impervious to: http://article.gmane.org/gmane.linux.cryptography/2321 [gmane.org]
Surprisingly, the research paper TFA talks about doesn't even directly mention loop-AES (its name only happens to be in the title of a webpage in the reference section describing a safe suspend/resume setup when using disk encryption).
Re:Hardly the problem (Score:3, Interesting)
Re:Clear the DRAM? (Score:5, Interesting)
Sony IBM Cell (Score:5, Interesting)
The first power word that a toddler learns is "mine!" It's the capstone to a complete working vocabulary: mommy, daddy, more, enough, and mine. My laptop, my hardware, my data, my privacy. The word "mine" has a direct bypass to the neurological circuit "you can't make me", which as adults lingers as a deeply-rooted fascination with rubber-hose cryptography, and bravado propositions such as "if the Feds bust through your windows". Wrong answer.
Let's look at this from Sony's perspective: my media, my hardware, my design, my copyright, my profits. But guess what? They have a small physical access problem. Millions of zit faced kids with access to liquid nitrogen can get their paws inside the PS3.
This is why an entire SPU is locked down on the PS3 for security / DRM purposes. The SPU contains 256K of SRAM which is carefully guarded. The instruction set is synchronous and deterministic to guard against timing attacks. They were aware of power attacks as well. These can be partially mitigated in software for critical routines by executing non-conditional instruction sequences and then discarding the portions of the computation you didn't want. By design, the SPU doesn't dance on the power line the way most modern speculative out-of-order processors do to begin with. You can't use latency effects, because the local SRAM has constant access time. You can't use contention effects because there aren't any below the level of DMA bursts, which are controlled by a companion processor within the SPU. Plus I think it is possible to schedule SPU-SPU and SPU-memory DMA transfers deterministically, if you really need to. None of this was accidental.
The hardest part of the problem is bootstrapping the secure SPU with the security kernel. I've forgotten how they went about it. There must be some kind of decrypt key buried in the Cell hardware which functions during initial code upload during processor initialization.
In the long run it might be an unwinnable battle, but the PS3 certainly has a far better facility to maintain data security in the complete absence of hardware security than your average PC.
So why can't average hacker Harry enjoy the same security as Sony/IBM? You've already got the PS3 in your living room. Impediment: the secure system init decrypt key is probably burned into the silicon. It's probably a one-way key, so even if you crack the key, you won't be able to encrypt a replacement block of your own code that matches the decrypt key. But let's suppose you break that too. Problem: Sony knows the decrypt key for the SPU initialization sequence. Game over.
Let's suppose you figure out how to physically change the silicon with an initialization decrypt code known only to yourself. Congratulations, you now enjoy the same protection for your secrets that Sony enjoys for "Untraceable". In doing so, you have now upgraded yourself to a sufficiently threatening fish to swim in a tank in Syria, where your nervous system will be similarly reconfigured.
Ew, I feel like I've just written the script for "Adaptation".
Memory is reliably addressed; just wipe it. (Score:5, Interesting)
What's more of a problem is: how do you make this timeout-plus-prompt-for-passphrase scheme work with disk-level encryption regardless of whether you're at a console or in a GUI, on an otherwise decent OS like Unix? I wouldn't trust Windows to implement disk-level encryption safely anyway, so all bets are off there. But Unix still has serious issues with reliably and securely presenting a simple dialog box to the user, no matter what part of the system they're looking at.
Re:Clear the DRAM? (Score:5, Interesting)
Re:Clear the DRAM? (Score:5, Interesting)
It rather depends... (Score:3, Interesting)
Is that absolute security? Well, an outsider couldn't break the encryption. To do so requires the pad. There are no shortcuts. That's pretty absolute. And even with the pad, you need the offset and step size. I've been told of systems where these were mechanically random - the one-time pad tapes synchronized themselves remotely at a speed that was effectively non-deterministic, consuming a non-deterministic amount of tape to do so, giving you the random offset and step size.
Ok, well is that perfect? Since not even the people with the machines could tell you in advance at what speed or at what point the synchronization would lock, that's pretty damn secure and could not possibly be known in advance. Thus, an attacker cannot have prior knowledge of the key being used. Nobody can, even if they have all of the physical components.
Yes yes yes, it's secure, but is it perfect? It all depends on what you mean by perfect:
It is not "perfect" in the general sense, because if person A can decrypt the message with some information in their possession (A'), then person B can also decrypt the message with that same information (A').
If you have N people in a group, how many of that N must be honest in order to guarantee that information from A to C reaches C even if at least one traitor B exists in that group, where C is not a traitor? The answer is (N/2)+1.
Now, if (N/2)+1 out of N must be honest, can we improve on our encryption system? In order to get the pad to C, we could make a copy and break it down into random-sized fragments (that may or may not have holes), such that you need (N/2)+1 fragments out of the original N in order to rebuild the tape in the correct sequence. Even if everyone else was a traitor and cooperating with each other, they could not reconstruct the key.
Now, to prevent said traitors tampering with the key, each fragment needs to be securely digitally signed. This means that A must have a key that everyone else can test against (and therefore decrypt) but nobody else can derive. Both the intended target and all traitors will thus know which fragments are real, but a traitor will not be able to make a fictional fragment look real.
Is this perfect now? It depends so much on what you mean by perfect. It's now impractical to attack in transit, provided the algorithms themselves do not have weaknesses which push below the assumed security threshold. Provided both source and destination are themselves genuine, AND provided both source and destination have internal compartmentalization such that if/when attacked, the attacker will not be exposed to the encrypted message, the decrypted message or the decryption key, you have an admirable level of security.
But is it perfect? Define perfect. I would consider any system that is so secure that it would never form part of the attack vector for the foreseeable future to be - in any practical sense - perfect. No matter how much you added to it, it would make no difference to what anyone did or how anyone acted. It's perfect in that attackers would spend their resources attacking anything and everything else. Even if it were "perfect" in some abstract, theoretical sense, it would not add ANY additional security to the system. To me, that is perfect in the meaningful sense.
Re:Not only DRAM but SRAM too (Score:3, Interesting)
Re:Very real concern (Score:3, Interesting)
It's pretty easy to tell, though: if you do my procedure and grep for "example", then see "http://www.example.com" in the result even though the site hasn't been visited in days or weeks (and the machine has been rebooted and powered down since), then the URL clearly appears to be lingering in RAM.
Someone mentioned browser cache, but that's set to be cleared each time Firefox is loaded. Perhaps file system slack space is the culprit here.
Someone else mentioned the VFS cache, which I don't know enough about to comment on. Would such metadata be preserved on disk and end up in memory after a restart of the OS? Certainly, I know that without wiping inodes, file names can persist on disk. For the curious, this can be seen by deleting a uniquely-named file from a directory, then doing a "dd if=[dir] | strings", which shows strings of current and deleted filenames. (Oddly enough, this worked for UFS, but not for my ZFS volume -- ZFS must handle things much differently.)
I do agree that your dump-to-file-then-grep method is much more sane than mine and less prone to false alarms.
In any case, this is a most interesting topic.
Re:Physical Access (Score:2, Interesting)
How about if I hit the reset button instead of the power button? The POST clears the DRAM during the pre-boot sequence.
Re:Firewire and USB can access memory (Score:2, Interesting)
This "cold RAM" idea is also interesting, though; in theory, you could design a floppy you could insert before cold rebooting that would find encryption keys for you, without even having to disassemble the computer. (I doubt it would work very well in practice, though.)
Countermeasure: VIA C3 has built-in SRAM (Score:5, Interesting)
Newer VIA processors also have some hardware AES support available under Linux, which they call "PadLock". So, if they still retain the SRAM feature, that would make a pretty good choice for the little fanless mini-ITX Linux box that receives your email.
Re:I can't believe this hasn't been mentioned... (Score:3, Interesting)
Re:Easy proof of concept: Three lines of code (Score:3, Interesting)
I suspect he is.
I remember on my Apple II, when you first turned it on, the screen was filled with inverse '@' symbols. Then the screen would quickly blank (or be replaced by spaces) and the startup sequence would go.
But if it hadn't been off for very long, not all the memory would have decayed to zero (CHR$(0) is the inverse '@'). So, if you turned your computer back on within a certain time, you could either see the screen it had when it shut off or, more likely, some rows of the screen would be inverse '@'s and some rows would show the previous screen -- sometimes with garbage characters interspersed, depending on how long the computer was off.
I never did investigate how long the non-screen portions of the memory lasted, but the screen portions (starting at $400, I think) would be gone in ten seconds.