Facebook Acquires Server-Focused Security Startup

wiredmikey writes: In a move to bolster the security of its massive global server network, Facebook announced on Thursday that it is acquiring PrivateCore, a Palo Alto, California-based cybersecurity startup. PrivateCore says its vCage software transparently secures data in use with full memory encryption for any application, any data, anywhere on standard x86 servers. "I'm really excited that Facebook has entered into an agreement to acquire PrivateCore," Facebook security chief Joe Sullivan wrote in a post to his own Facebook page. "I believe that PrivateCore's technology and expertise will help support Facebook's mission to help make the world more open and connected, in a secure and trusted way," Sullivan said. "Over time, we plan to deploy PrivateCore's technology directly into the Facebook server stack."
  • Re:Keys (Score:4, Interesting)

    by slincolne ( 1111555 ) on Friday August 08, 2014 @12:41AM (#47628059)
    The FAQ posted on their web site mentions the Intel TPM chip.
  • by Menacer ( 222952 ) on Friday August 08, 2014 @02:44AM (#47628327)

    The goal of PrivateCore's product was to encrypt everything outside of the CPU using software techniques. So once you've done an attested boot and gotten your crypto keys in order, from that point on anything that leaves the CPU socket is encrypted (except I/O to the network, I guess, but definitely data going to the hard disk, the DRAM, etc.). Their key selling point was that you could protect against cold boot attacks, DMA data dumps, sniffers on the DRAM lines, and so on. They also claim to have a secure hypervisor (preventing cross-VM thievery) because they've stripped it down to its bare bones, but I believe this ended up being a secondary concern.

    Anyway, their goal was to have unencrypted data in the caches, but encrypt the data before it leaves the chip and goes out to DRAM. Their page is mostly high-level marketing fluff, so if they were claiming to do more than this, I missed it. The hardware for encrypted DRAM accesses exists in specialized platforms (e.g. the Xbox 360) but doesn't currently exist in commodity x86 server parts. As such, a friend and I sat down for an evening a while ago and tried to work out how they would do this without a DRAM controller that does the encryption for you.

    Again, their goal is to have decrypted data in the caches, encrypted data in the DRAM. The crypto routines would have to be contained in software. The major difficulty is that the cache does whatever the cache wants, so it's really rather difficult to say "when this data is leaving the cache, call the software crypto routines." There is no good way for the hardware to tell you it's kicking data out of the cache. (There are academic proposals for this kind of information, but nothing currently exists.)
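    The eviction problem above is the crux: software only issues loads and stores, while the cache silently writes plaintext back to DRAM whenever it runs out of room. A minimal toy model (hypothetical names and a write-back LRU policy chosen for illustration, not PrivateCore's code) shows why "just keep the secrets in cache" fails without controlling cacheability:

```python
from collections import OrderedDict

class TransparentCache:
    """Toy write-back LRU cache: evictions happen silently, with no
    callback into software -- just like a real hardware cache."""
    def __init__(self, capacity, dram):
        self.capacity = capacity
        self.dram = dram            # backing store (models DRAM)
        self.lines = OrderedDict()  # address -> plaintext value

    def store(self, addr, value):
        self.lines[addr] = value
        self.lines.move_to_end(addr)
        if len(self.lines) > self.capacity:
            # Silent eviction: plaintext lands in DRAM. Software that
            # merely "keeps secrets in cache" never sees this happen.
            old_addr, old_val = self.lines.popitem(last=False)
            self.dram[old_addr] = old_val

dram = {}
cache = TransparentCache(capacity=2, dram=dram)
for addr, secret in enumerate([b"key0", b"key1", b"key2"]):
    cache.store(addr, secret)

print(dram)  # {0: b'key0'} -- a secret leaked to DRAM in the clear
```

    Storing a third secret into a two-line cache quietly spills the oldest one, unencrypted, into the DRAM model; nothing in the `store` path ever told the caller. This is why their scheme has to partition memory into cacheable and non-cacheable regions up front rather than react to evictions.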

    We thought up a number of solutions and were able to validate our guesses against their patent submission. I will gloss over some of the deeper details (such as methods for reverse engineering the cache's replacement policy).

    The shortened version is:
    1) Work on Intel cores that have >=30 MB of L3
    2) Run a tiny hypervisor that fits into some small amount of memory (let's say 10MB)
    3) Mark all memory in the system that is not part of the hypervisor's code pages as non-cacheable
    4) The hypervisor also has the crypto routines, so all of these non-cacheable pages can now be software encrypted using the hypervisor's routines. The DRAM-resident data is now encrypted.
    4a) Because everything else was marked non-cacheable, it never enters the cache, so the hypervisor stays resident (it is never displaced).
    5) Mark some remaining amount (let's say 20 MB) of physical memory as cacheable. This physical memory currently contains no data at all.
    6) When you want to run a program or an OS, have the hypervisor move that program's starting code into the 20 MB range (decrypting it along the way) and set its virtual pages to point to that physical memory range.
    7) The program can now run because at least some of its pages are decrypted. They are also cacheable, so accesses will hit in the cache.
    8) When you try to access code or data that is still encrypted, it will cause a page fault.
    9) The hypervisor's page fault handler will fetch that encrypted data, decrypt it, and put it somewhere in the 20 MB range.
    9a) If the 20 MB region is already full of decrypted data, you will have to re-encrypt some of it and spill it back to DRAM (like paging it out to disk).

    Because you are only touching ~30 MB of physical memory that is marked as cacheable, you will "never" spill decrypted data to the DRAM. Essentially, they built a system with 30 MB of main memory (that 30 MB being the SRAM of the on-chip cache), where DRAM is treated like disk/swap in a demand-paging system.
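    The steps above amount to demand paging where "page in" means decrypt and "page out" means re-encrypt. Here is a toy sketch of that invariant (the XOR "cipher", page size, working-set size, and all names are illustrative stand-ins; a real system would use AES and real page tables):

```python
PAGE_SIZE = 16    # toy page size
WORKING_SET = 2   # pages of "cacheable" plaintext (models the 20 MB region)

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Stand-in cipher for illustration only; symmetric, so it both
    # "encrypts" and "decrypts".
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class EncryptedMemory:
    """Toy model of the scheme: DRAM holds only ciphertext; a small
    fixed-size plaintext working set models the cache-resident region.
    Faulting a page in decrypts it; spilling a page re-encrypts it."""
    def __init__(self, key: bytes):
        self.key = key
        self.dram = {}      # page number -> ciphertext only
        self.working = {}   # page number -> plaintext (bounded size)
        self.lru = []       # fault-in / touch order

    def write_page(self, page: int, data: bytes):
        self._fault_in(page)
        self.working[page] = data

    def read_page(self, page: int) -> bytes:
        self._fault_in(page)
        return self.working[page]

    def _fault_in(self, page: int):
        if page in self.working:
            self.lru.remove(page)
            self.lru.append(page)
            return
        if len(self.working) >= WORKING_SET:
            # Spill: re-encrypt the coldest page back to DRAM,
            # like paging out to disk in a demand-paging system.
            victim = self.lru.pop(0)
            self.dram[victim] = xor_crypt(self.working.pop(victim), self.key)
        ciphertext = self.dram.get(page, bytes(PAGE_SIZE))
        self.working[page] = xor_crypt(ciphertext, self.key)  # decrypt on fault
        self.lru.append(page)

key = bytes(range(1, PAGE_SIZE + 1))  # fixed nonzero toy key
mem = EncryptedMemory(key)
mem.write_page(0, b"secret page zero")
mem.write_page(1, b"secret page one!")
mem.write_page(2, b"secret page two!")  # forces page 0 to spill, re-encrypted

assert b"secret page zero" not in mem.dram.values()  # DRAM holds ciphertext
assert mem.read_page(0) == b"secret page zero"       # fault back in, decrypted
```

    Writing a third page with a two-page working set forces the coldest page out, but only after re-encryption, so DRAM never sees plaintext; reading it back triggers the fault-in/decrypt path, just as step 9 above describes.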

    The reason I am convinced this is likely an acquisition-hire
