
New Encryption Method Fights Reverse Engineering 215

New submitter Dharkfiber sends an article about the Hardened Anti-Reverse Engineering System (HARES), which is an encryption tool for software that doesn't allow the code to be decrypted until the last possible moment before it's executed. The purpose is to make applications as opaque as possible to malicious hackers trying to find vulnerabilities to exploit. It's likely to find work as an anti-piracy tool as well. To keep reverse engineering tools in the dark, HARES uses a hardware trick that’s possible with Intel and AMD chips called a Translation Lookaside Buffer (or TLB) Split. That TLB Split segregates the portion of a computer’s memory where a program stores its data from the portion where it stores its own code’s instructions. HARES keeps everything in that “instructions” portion of memory encrypted such that it can only be decrypted with a key that resides in the computer’s processor. (That means even sophisticated tricks like a “cold boot attack,” which literally freezes the data in a computer’s RAM, can’t pull the key out of memory.) When a common reverse engineering tool like IDA Pro reads the computer’s memory to find the program’s instructions, that TLB split redirects the reverse engineering tool to the section of memory that’s filled with encrypted, unreadable commands.
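
A minimal sketch of the decrypt-at-the-last-moment idea in user-space C (Linux/x86-64 assumed; a single-byte XOR stands in for real encryption and the key lives in ordinary memory, so this illustrates only the just-in-time decryption, not HARES's TLB split or in-processor key storage):

    /* Keep machine code encrypted at rest and only produce executable
     * plaintext in an RX mapping immediately before calling it.
     * Linux/x86-64 only; the "cipher" is a toy XOR for illustration. */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define KEY 0x5A
    /* mov eax, 42 ; ret  -- stored XORed with KEY */
    static const uint8_t enc_code[] = { 0xB8 ^ KEY, 0x2A ^ KEY, 0x00 ^ KEY,
                                        0x00 ^ KEY, 0x00 ^ KEY, 0xC3 ^ KEY };

    int main(void) {
        long pagesz = sysconf(_SC_PAGESIZE);
        uint8_t *page = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) return 1;

        for (size_t i = 0; i < sizeof enc_code; i++)
            page[i] = enc_code[i] ^ KEY;        /* decrypt at the last moment */

        if (mprotect(page, pagesz, PROT_READ | PROT_EXEC) != 0) return 1;

        int (*fn)(void) = (int (*)(void))(void *)page;
        printf("decrypted function returned %d\n", fn());

        munmap(page, pagesz);
        return 0;
    }
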
  • by aaaaaaargh! ( 1150173 ) on Friday February 13, 2015 @11:04AM (#49047467)

    The crackers are going to love breaking this in 1, 2, 3 ...

    • by halivar ( 535827 ) <bfelger@gmai l . com> on Friday February 13, 2015 @11:14AM (#49047579)

      Ah, but count-ups are indefinite. Now they won't find it until they count to a million or something. Should have counted down, but now it's too late...

    • by Anonymous Coward on Friday February 13, 2015 @11:19AM (#49047651)

      I built a technological solution similar to this, where the TLB split was done in ring -1 (VMX/SMM). Ridiculously easy to decrypt and execute on the fly, only as a given page is executed. Really fast. Key exchange happens in ring -1 with an external licensing server. The only way to defeat my mechanism is to get into ring -1 before I did, which of course is possible to do. No DRM system is perfectly secure, but this was orders of magnitude more difficult than your average system. If you attached a debugger to the protected process, you would literally see the encrypted opcodes. You could single-step and execute as normal, but the executable code was always encrypted from the user's perspective, because data reads would always return the encrypted code whereas instruction reads would always be decrypted.

      The biggest problem I had with this technology was actually the compiler. Some compilers like to mix read-only data into code segments. It wasn't impossible to fix, but it was the biggest headache.
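
      The read-versus-execute distinction is easy to picture with a toy like the one below (plain C, no encryption, typical Linux/x86-64 assumed). On a normal machine the data read returns the function's real opcodes; under a split-TLB scheme it would return ciphertext, and any read-only constants the compiler parked in the same page would read back as ciphertext too, which is exactly why mixed code/data segments are such a headache.

          /* Read a function's own bytes through a data pointer.  Normally you
           * see the real opcodes; under a split TLB the data path would show
           * the encrypted page even though the CPU executes decrypted code.
           * (Casting a function pointer to a data pointer is non-standard but
           * works on typical Linux/x86-64 toolchains.) */
          #include <stdio.h>

          static int answer(void) { return 42; }

          int main(void) {
              const unsigned char *p = (const unsigned char *)(void *)answer;
              printf("answer() = %d\n", answer());
              printf("first bytes of answer() via a data read: %02x %02x %02x %02x\n",
                     p[0], p[1], p[2], p[3]);
              return 0;
          }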

      • by wierd_w ( 1375923 ) on Friday February 13, 2015 @11:50AM (#49047919)

        Sounds like all you need to analyze this is a "fake" processor.

        E.g., running this inside something like Bochs, which has a built-in x86 debugger and runs a lot like a hypervisor. This encryption would need to be able to detect that it is living inside a fully emulated system, and simply refuse to operate, in order to be safe from this kind of analysis. Bochs will let you step through exactly what instructions the emulated CPU is actually doing, regardless of the data that is stored in the memory allocated to the emulator's process.

        Don't get me wrong-- this makes a nasty bump in the road for career data thieves, but forensic analysis of the encryption is not completely thwarted.

        • Not to mention that it is extremely hard for a program to detect that it is inside a VM like Bochs unless the VM exposes something that can be detected, e.g. a BIOS string, hardware signature, etc. Even then, that's easy for a cracker to fix by modifying the VM to have a different string or hardware signature.
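
          For what it's worth, the cheap checks usually look like the sketch below (C, GCC/Clang on x86). Whether they fire depends entirely on the VM; a whole-machine emulator like Bochs presents itself as ordinary hardware and need not expose either signal.

              /* Two well-known giveaways a hypervisor *may* expose.
               * CPUID leaf 1, ECX bit 31 is the "hypervisor present" bit;
               * leaf 0x40000000 often returns a vendor string ("KVMKVMKVM",
               * "VMwareVMware", "Microsoft Hv", ...).  x86 only. */
              #include <cpuid.h>
              #include <stdio.h>
              #include <string.h>

              int main(void) {
                  unsigned int eax, ebx, ecx, edx;

                  __cpuid(1, eax, ebx, ecx, edx);
                  int hv_bit = (ecx >> 31) & 1;

                  char vendor[13] = {0};
                  __cpuid(0x40000000, eax, ebx, ecx, edx);
                  memcpy(vendor + 0, &ebx, 4);
                  memcpy(vendor + 4, &ecx, 4);
                  memcpy(vendor + 8, &edx, 4);

                  printf("hypervisor bit: %d, vendor leaf: \"%s\"\n", hv_bit, vendor);
                  return 0;
              }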

          • by Zardus ( 464755 )

            That's actually the opposite of true. Many techniques (http://static.usenix.org/event/woot09/tech/full_papers/paleari.pdf, http://roberto.greyhats.it/pro... [greyhats.it], http://honeynet.asu.edu/morphe... [asu.edu], http://www.symantec.com/avcent... [symantec.com]) exist to identify the presence of a CPU emulator, because these things aren't (and will likely never be) perfect. Most of those techniques don't even rely on timing attacks. Once you introduce timing attacks (*especially* if there's an external source of time information), all bets are off.

            • You do realize that Bochs does software emulation of each instruction, and that you can control every aspect of the emulated computer, don't you?

              If you are running something under Bochs or something like it and don't care about the performance, you can actually make it lie to the software underneath about timing, so that the software still thinks it is running at the normal rate when in reality it isn't. Bochs, after all, implements the base system clock itself rather than relying on an external source. This is also why Boc
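
              The timing probe being discussed looks roughly like the sketch below (C, x86-64, GCC/Clang). What counts as a suspicious number is deliberately left open, and as noted above, an emulator that owns the virtual clock can simply lie to it.

                  /* Measure the cheapest observed cost of CPUID, an
                   * instruction that is disproportionately expensive under
                   * emulation or virtualization.  Real detectors compare
                   * this against other instructions or an external clock. */
                  #include <stdint.h>
                  #include <stdio.h>
                  #include <x86intrin.h>   /* __rdtsc */
                  #include <cpuid.h>

                  int main(void) {
                      unsigned int a, b, c, d;
                      uint64_t best = UINT64_MAX;

                      for (int i = 0; i < 1000; i++) {
                          uint64_t t0 = __rdtsc();
                          __cpuid(0, a, b, c, d);
                          uint64_t t1 = __rdtsc();
                          if (t1 - t0 < best) best = t1 - t0;
                      }
                      printf("min cpuid cost: %llu cycles\n",
                             (unsigned long long)best);
                      return 0;
                  }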

    • The crackers are going to love breaking this in 1, 2, 3 ...

      Better crackers are going to love breaking this in 3, 2, 1 ... :-)

    • This is essentially just TRESOR with a different acronym. It's already been broken.
      • by linuxrocks123 ( 905424 ) on Friday February 13, 2015 @05:04PM (#49051091) Homepage Journal

        I am the author of Loop-Amnesia, a system similar to TRESOR but more sophisticated in that it supports multiple encrypted volumes. Having looked over the article, I don't think this is similar at all. It also does not appear to protect against the cold boot attack as claimed.

        The authors claim a 2% performance reduction. Such a reduction implies that the instructions are not being decrypted literally on the fly; the hit would be much more severe in that case. They're using a tactic called a "TLB split", which desynchronizes the cached address translations so that reading memory gets you different results from executing it. A page of executable code is likely decrypted with a key stored in the CPU, placed in a different physical page, and then the TLB split is performed so that instruction fetches go to the decrypted page while data reads still go to the encrypted one.

        The cold boot attack dumps physical memory. This tactic corrupts virtual memory to frustrate analysis. The executable code is still stored in RAM somewhere, just not somewhere where you can get to it by reading from a virtual memory address. The cold boot attack would still work fine.

        Finally, TRESOR and Loop-Amnesia are not broken. TRESOR-HUNT only works if you enable DMA on your FireWire bus. You shouldn't be doing that anyway.

    • The crackers are going to love breaking this in 1, 2, 3 ...

      Odds are the antivirus companies will beat them to it. Else how will they protect against encrypted viruses? Gotta at least maintain the pretense of protection, right?

  • by PPH ( 736903 ) on Friday February 13, 2015 @11:06AM (#49047483)

    I keep my code unreadable with a liberal use of goto [slashdot.org] statements.

  • by Anonymous Coward

    I assume by "inside the processor" they mean in the L2 or L3 cache. What is to stop someone from extracting the cached keys and decrypting the entire program? I assume they have some mechanism, but does anyone know what sort of mechanism that would be?

  • More of the same: (Score:5, Interesting)

    by Hartree ( 191324 ) on Friday February 13, 2015 @11:09AM (#49047511)

    Just another step along the road of "We own your computer, not you."

    • by DocSavage64109 ( 799754 ) on Friday February 13, 2015 @11:15AM (#49047599)
      Assuming this encryption actually works, it probably wreaks major havoc with processor caching and branch prediction algorithms. I'd be interested in seeing benchmarks of this encryption in action vs the non-encrypted version.
      • by Hartree ( 191324 )

        I admit I haven't looked into it deeply yet, but I suspect it may be able to switch in and out of this mode. Otherwise, you'd have to precompile everything you run in encrypted form and wouldn't be able to use any shared libraries. The binaries would be pretty tubby and performance would suck for the reasons you give.

        Run the license checks and some of the key code that's not very compute intensive in the encrypted space, and then shift context to run things you call to do the heavy work in unencrypted space.

    • by faedle ( 114018 )

      What scares me more is virus and malware creators getting hold of this technology. If it does what is being claimed, imagine having to write a defense against malware encrypted this way.

  • Really? (Score:5, Insightful)

    by jythie ( 914043 ) on Friday February 13, 2015 @11:10AM (#49047527)
    Does anyone in the industry who actually works with computers believe these kinds of claims? Such technologies are great for getting buy-in from marketing, the legal department, underwriters, and content owners, but beyond making developers' lives more difficult, I have not heard of them actually stopping reverse engineering.

    The only time these kinds of tools seem to 'work' is when you are producing something which lacks the popularity to be worth the effort, which is not a good sign.
    • Vertical markets have severe problems with unauthorized software use (i.e., piracy). This will make cracking that software much more difficult.

      • by jythie ( 914043 )
        Oh, I agree there is a demand for them, but I question how well they actually fill the need. I have used several, both in-house and commercial, including ones that required special hardware to run, and they all got cracked. The only time they helped even a little was for low-value products that were not worth the pirates' time.
    • Re:Really? (Score:5, Informative)

      by mlts ( 1038732 ) on Friday February 13, 2015 @11:39AM (#49047823)

      We have had many, many technologies that were supposed to stop reverse engineering.

      I remember back in the Apple ][ days, a program called "Lock it Up" by Double Gold Software had anti-reverse-engineering tricks in it and was advertised as sending the bad guys packing (one of those tricks was "poke 214,128", which would disable the BASIC prompt). Then we had obfuscators for C++, BASIC, Java, and other languages; same thing.

      This technology looks like it will be broken by running it in a VM, so I'm sure the next generation will have anti-VM stuff in it, and someone will just run a Bochs emulator (dog slow, but emulates everything 100%) to bypass that.

      My take: How about companies spend money on improving their software instead of playing with DRM which will get broken anyway? In the enterprise, the fear of an audit is good enough to keep people in compliance with Oracle licenses. For games, using CD keys is good enough. They can play locally, but can't go multiplayer without a proper key.

      If the code is so sensitive it -has- to be protected, put it in a tamper-resistant appliance, like an HSM.

    • Re:Really? (Score:5, Interesting)

      by davydagger ( 2566757 ) on Friday February 13, 2015 @11:46AM (#49047887)
      It's possible, but entirely unlikely. There is going to be a massive performance hit. What they can do is encrypt RAM with the key kept directly in the CPU, something modern computers support (see the unmerged TRESOR patches). They can then decrypt the data with hardware instructions (AES-NI) as it moves from main memory into the cache, either on die or on the motherboard, which cannot be easily removed (not within the timespan of a cold boot attack). Harder to break does not mean unbreakable, either. If the operating system is rooted, it might very well be easy enough to get the key from the CPU (AES is symmetric, and no CPUs have hardware implementations of asymmetric ciphers). If the key is burned when the CPU is made, it would be an industry-wide key. All it would take is one leak, one time, and everyone can decrypt memory.
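
      For reference, the per-block hardware decryption being described is built from the AES-NI intrinsics. A minimal single-block AES-128 encryption with the standard key schedule looks like the sketch below (illustrative only; build with gcc -maes). A memory-encryption scheme would run something like this for every cache line moved in or out, which is where the performance hit comes from.

          /* Single-block AES-128 with AES-NI intrinsics (GCC/Clang, -maes).
           * Key expansion uses the usual aeskeygenassist trick. */
          #include <stdio.h>
          #include <stdint.h>
          #include <wmmintrin.h>

          static __m128i expand_step(__m128i key, __m128i kg) {
              kg  = _mm_shuffle_epi32(kg, _MM_SHUFFLE(3, 3, 3, 3));
              key = _mm_xor_si128(key, _mm_slli_si128(key, 4));
              key = _mm_xor_si128(key, _mm_slli_si128(key, 4));
              key = _mm_xor_si128(key, _mm_slli_si128(key, 4));
              return _mm_xor_si128(key, kg);
          }
          #define EXPAND(k, rcon) expand_step((k), _mm_aeskeygenassist_si128((k), (rcon)))

          int main(void) {
              uint8_t key[16]   = "0123456789abcdef";   /* demo key */
              uint8_t block[16] = "attack at dawn!!";   /* one AES block */
              __m128i rk[11];

              rk[0]  = _mm_loadu_si128((const __m128i *)key);
              rk[1]  = EXPAND(rk[0], 0x01); rk[2]  = EXPAND(rk[1], 0x02);
              rk[3]  = EXPAND(rk[2], 0x04); rk[4]  = EXPAND(rk[3], 0x08);
              rk[5]  = EXPAND(rk[4], 0x10); rk[6]  = EXPAND(rk[5], 0x20);
              rk[7]  = EXPAND(rk[6], 0x40); rk[8]  = EXPAND(rk[7], 0x80);
              rk[9]  = EXPAND(rk[8], 0x1b); rk[10] = EXPAND(rk[9], 0x36);

              __m128i b = _mm_loadu_si128((const __m128i *)block);
              b = _mm_xor_si128(b, rk[0]);
              for (int i = 1; i < 10; i++)
                  b = _mm_aesenc_si128(b, rk[i]);
              b = _mm_aesenclast_si128(b, rk[10]);

              uint8_t out[16];
              _mm_storeu_si128((__m128i *)out, b);
              for (int i = 0; i < 16; i++) printf("%02x", out[i]);
              printf("\n");
              return 0;
          }
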
      • If the key is burned when the CPU is made, it would be an industry-wide key.

        But if the key is burned when the CPU is tested, it can be a CPU-unique key. Is there any evidence they would mask the key, when all modern processors have microcode anyway?

        • The problem with a CPU-unique key is that the same software needs to work on every device. The only crypto on the chip is AES, which is a symmetric block cipher; symmetric ciphers use a single shared key among recipients.

          To have a unique key on every chip, you'd need asymmetric crypto, something like Secure Boot or TLS, which work with certificate chains and allow cryptographic verification via unique certificates that contain keys.

          That does not exist on die. They'd have to load that into cache

          • by tricorn ( 199664 )

            With hardware support in the CPU this can be done properly.

            A CPU-unique public/private key pair is generated by the manufacturer, and the public key is signed by the manufacturer's private key. To install a program, the CPU's public key is validated, the program is encrypted with a unique key, the unique key is encrypted with the CPU's public key, and the program and encrypted key are sent to the customer.

            The CPU would then be given the execution key, which it decrypts internally with the private key and saves securely (no access via JTAG, no instructions to access it
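
            That provisioning flow is ordinary hybrid encryption. A user-space sketch with libsodium sealed boxes standing in for whatever a CPU vendor would actually implement (link with -lsodium; the "CPU key pair" is just generated in the program here for illustration):

                /* Vendor seals a per-program execution key to the chip's
                 * public key; only the holder of the matching private key
                 * (the CPU, in this scheme) can recover it. */
                #include <stdio.h>
                #include <string.h>
                #include <sodium.h>

                int main(void) {
                    if (sodium_init() < 0) return 1;

                    /* Stand-in for the CPU-unique key pair burned in at manufacturing. */
                    unsigned char cpu_pk[crypto_box_PUBLICKEYBYTES];
                    unsigned char cpu_sk[crypto_box_SECRETKEYBYTES];
                    crypto_box_keypair(cpu_pk, cpu_sk);

                    /* Per-program execution key chosen by the software vendor. */
                    unsigned char exec_key[32];
                    randombytes_buf(exec_key, sizeof exec_key);

                    /* Vendor side: seal the execution key to the CPU's public key. */
                    unsigned char sealed[sizeof exec_key + crypto_box_SEALBYTES];
                    crypto_box_seal(sealed, exec_key, sizeof exec_key, cpu_pk);

                    /* CPU side: recover it with the private key it never exposes. */
                    unsigned char recovered[32];
                    if (crypto_box_seal_open(recovered, sealed, sizeof sealed,
                                             cpu_pk, cpu_sk) != 0) {
                        puts("unseal failed");
                        return 1;
                    }
                    puts(memcmp(exec_key, recovered, 32) == 0
                             ? "execution key recovered" : "mismatch");
                    return 0;
                }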

    • I remember a computer that operated in that fashion: the Texas Instruments TI-99/4A. It had this thing called a "GROM" that decrypted instructions before execution. It didn't make a lot of friends and very rapidly dove into obscurity.

      This more recent attempt makes me wonder about 2 things.

      First, encryption, like compression, is usually something that's applied to linear sequences. If you literally encrypted the code, then it seems like at a minimum the TLB pages would have to be decrypted as units, since

  • by NotDrWho ( 3543773 ) on Friday February 13, 2015 @11:10AM (#49047529)

    That's it. They've finally come up with uncrackable software. I guess all the hackers will just have to pack their bags and find another hobby now. It was a good many decades while it lasted. But now it's clearly over. Congrats to Jacob Torrey on doing what no one else has ever been able to do! No way this will ever be cracked. He's beaten us all.

  • In 3...2...1... (Score:5, Insightful)

    by PvtVoid ( 1252388 ) on Friday February 13, 2015 @11:12AM (#49047547)

    ... somebody exploits this to write malware that's truly a bitch to reverse-engineer.

  • No JTAG access? (Score:5, Interesting)

    by Migraineman ( 632203 ) on Friday February 13, 2015 @11:13AM (#49047569)
    Are the TLB registers not accessible via JTAG?
  • Sigh. (Score:5, Insightful)

    by ledow ( 319597 ) on Friday February 13, 2015 @11:14AM (#49047585) Homepage

    "Another way to crack HARES' encryption, says Torrey, would be to take advantage of a debugging feature in some chips... But taking advantage of that feature requires a five-figure-priced JTAG debugger, not a device most reverse engineers tend to have lying around."

    Or running the code in a VM.

    Really? This sounds just the same as someone saying that DEP would stop this kind of reverse engineering (the concept seems incredibly similar to me; maybe I'm wrong). If someone wants to reverse engineer software, they will have the tools to do so, and in this modern world any protection that works on physical hardware but not in a VM must have a limited lifespan.

    If all else fails, emulate the machine. Slow, yes, but reverse engineering and debugging are incredibly slow processes anyway.

    Sorry, but this is a slashvertisement for something with precisely zero deployments in real-life software that people might want to reverse-engineer.

    And, as said, all you've done is make it easier to create malware that's difficult to remove. So, in effect, such facilities in processors will end up being beefed up to take account of this, rendering the technique obsolete.

    In all of recorded computing history, every technique for preventing reverse-engineering or debugging has turned out not to work, or to be so onerous on users that nobody ever actually enables it.

  • by Anonymous Coward on Friday February 13, 2015 @11:16AM (#49047613)

    Recent Intel processors have the ability to use encrypted RAM and only decrypt it in the CPU's caches. They do it with the SGX instructions. [intel.com]

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Not released until Skylake. Can't believe this is +4 Informative. Should be -1, No Research.
      https://software.intel.com/en-us/intel-isa-extensions

  • Does it matter? (Score:5, Insightful)

    by aepervius ( 535155 ) on Friday February 13, 2015 @11:17AM (#49047621)

    As long as you can hide from the software the fact that you are debugging it, you can step through it until it is decrypted. So for all the money and all the added complexity, all you've won is a little more time. The only real copy protection is when part of the code is not run locally but on a remote machine: for example, something on a server that has to be queried before the software will continue, like some online authorization schemes.

    • by ledow ( 319597 )

      Weren't there online MMO games that tried that? And someone just made a tool that cached all the content etc. that the local computer received and offered it from a fake local server instead. Not perfect, but surely good enough to defeat even these tactics if you have enough interest in reverse-engineering something.

      If the code you want to protect is running on general-purpose processors under the control of a third-party (the user who might want to reverse-engineer), there's nothing you can do to stop th

      • That was remote content, not remote execution. Think of it more like taking calculateBulletTrajectory() and offloading its task to the game server. Sure, you can tap the FIRE button without being connected to the server, but your game will likely crash, since it won't know which way the bullet is heading. I'd have suggested something a little less lag-sensitive than calculating the trajectory of the fast-moving object you just carefully aimed and set into motion, but you and I both know that gaming compani
    • by sribe ( 304414 )

      As long as you can hide from the software the fact that you are debugging it, you can step through it until it is decrypted.

      Yep. In fact, you could build a virtual machine that would automate that for you, and collect the decrypted instructions as it runs.

      So, as always, "technique to prevent reverse engineering" == "snake oil"...
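
      The brute-force version of that, minus the emulator, is just a single-stepping tracer. Here's a Linux/x86-64 ptrace sketch (illustrative only; under a real TLB split the PEEKTEXT reads below would return ciphertext, which is exactly why you would move the whole thing into an emulator that sees what the CPU sees):

          /* Minimal single-step tracer (Linux, x86-64): runs a target program
           * and prints the instruction pointer plus the word of "code" at that
           * address for the first few steps.  Usage: ./trace /bin/true */
          #include <stdio.h>
          #include <unistd.h>
          #include <sys/ptrace.h>
          #include <sys/wait.h>
          #include <sys/user.h>

          int main(int argc, char **argv) {
              if (argc < 2) { fprintf(stderr, "usage: %s <program>\n", argv[0]); return 1; }

              pid_t pid = fork();
              if (pid == 0) {
                  ptrace(PTRACE_TRACEME, 0, NULL, NULL);
                  execvp(argv[1], &argv[1]);
                  _exit(127);
              }

              int status;
              waitpid(pid, &status, 0);
              unsigned long steps = 0;

              while (WIFSTOPPED(status)) {
                  struct user_regs_struct regs;
                  ptrace(PTRACE_GETREGS, pid, NULL, &regs);
                  long word = ptrace(PTRACE_PEEKTEXT, pid, (void *)regs.rip, NULL);
                  if (steps < 20)   /* don't flood the terminal */
                      printf("rip=%llx bytes=%016lx\n",
                             (unsigned long long)regs.rip, (unsigned long)word);
                  steps++;
                  ptrace(PTRACE_SINGLESTEP, pid, NULL, NULL);
                  waitpid(pid, &status, 0);
              }
              printf("%lu instructions single-stepped\n", steps);
              return 0;
          }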

    • by Lehk228 ( 705449 )
      Even that does nothing useful, as a fake server can be set up.

      You need irreplaceable functions to be running on a server, functions that go to the core of the enjoyment of the game and which cannot be easily reverse engineered to make a simulated version of the server.

      Unfortunately, the more you put on the server, the higher your ongoing operation cost is going to be, and the more outraged your players will be when you shut down servers and the game stops working, so such a technique will only work for one o
  • Is it emulator-safe? Would there be any way to determine which instructions are part of the decryption engine and which are part of the application?

  • Some translation step would be required for an issued binary (e.g. an operating system and/or program) to be transformed into locally encrypted "executable" code. Now if there is a mechanism available to a BIOS and/or an operating system to do this, then it could be subverted. So why bother?
  • > That means even sophisticated tricks like a “cold boot attack,” which literally freezes the data in a computer’s RAM

    Does the attacker douse the computer in liquid nitrogen, like the T-1000?

  • To do the job properly would take a processor with encryption support baked in, like Sega's Kabuki, or baked into the memory controller as in the Xbox. Software encryption or obfuscation is nearly useless.

    • by suutar ( 1860506 )

      And of course it either has to refuse to run on a processor without baked-in encryption, or it's vulnerable to emulation. (Heck, if you can emulate an encrypting processor, it's still vulnerable to emulation...)

  • by g0bshiTe ( 596213 ) on Friday February 13, 2015 @11:55AM (#49047965)
    How will this hinder current fuzzing techniques to search for vulnerabilities?
  • For some fun, read the comments on TFA, and compare them to the comments here.

    Demographic estimations, anyone?

  • by janoc ( 699997 ) on Friday February 13, 2015 @12:04PM (#49048065)

    As this by definition requires that the encryption key be present in the clear on the machine where the decryption is happening (the CPU cannot execute encrypted code), it can be trivially circumvented. Finding where the key is stashed is only a matter of time, and then the encrypted code can be conveniently decrypted offline, repackaged without the stupid performance-impeding encryption (caching will suffer badly with it), and released on a torrent somewhere, as always...

    Fundamentally, this is no different from doing ROT13 on your code: it's code obfuscation.

  • In the article, they mention a JTAG hardware debugger as a possible tool to defeat HARES. Suppose, however, that your reverse engineering environment is a virtual machine, possibly with CPU emulation as well. Couldn't you then do the equivalent debugging in software? What if you write the VM software specifically to help you out, possibly even to just give you the key which the authentic CPU would keep secret?
  • AV products will have to kill this dead, because they won't be able to easily detect malware. If it can't be inspected, it can't be known to be safe, so I'm going to bet anything using this that isn't whitelisted, e.g. by digital signature, is going to be DOA.

  • Can't you just emulate the processor with QEMU and run the app in a sandboxed environment ?

    https://github.com/hackndev/qe... [github.com]

    ------

    https://stackoverflow.com/ques... [stackoverflow.com]

  • by Megol ( 3135005 ) on Friday February 13, 2015 @12:55PM (#49048723)

    I honestly don't see how this is anything innovative; it's a known artifact of x86 microarchitecture (it isn't an architectural thing, though, and it will not work on all x86 processors*). That it could be used for a copy protection scheme is also obvious to anyone with that level of knowledge.

    This, together with things like disabling primed data caches (x86 processors will still allow accesses to caches even when disabled under some circumstances), is a trick that is relatively fragile. And it really doesn't buy much extra security given the existence of a good low-level emulator.

    (* there are x86 processors with a shared I/D TLB, not commonly in use nowadays though, exercise for the reader ;P)

  • Seriously. Are there really people out there so naive that they think this will pose anything more than a minor inconvenience?

  • What they really want is white box cryptography, but it seems computationally impractical right now.

    Also, http://en.wikipedia.org/wiki/T... [wikipedia.org] did this and was broken!

  • by 140Mandak262Jamuna ( 970587 ) on Friday February 13, 2015 @01:43PM (#49049249) Journal
    Hire a few Indian H-1Bs supplied by Wipro or Infosys or Cognizant. The code they write is impossible for anyone to decode, even if you disclose the source code and the design specs.

    Venkat!!! Why in God's good name are you passing a reference to a pointer to a function as a constructor argument?!?!?! Aarggghhhh!

  • by Terje Mathisen ( 128806 ) on Friday February 13, 2015 @02:02PM (#49049485)

    To me it looks like this trick has a similarly simple trick to defeat it:

    Assuming you can run some code in kernel (or even SMM) mode, you should be able to scan through all code segments that are marked execute-only and that have a data segment aliasing them, i.e. same virtual address, different physical addresses.

    When you find such blocks, you just create new read-only or read-write mappings that point to the same physical addresses as the decrypted/execute-only memory.

    At that point you can dump/debug to your heart's content.

    Terje
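
    The aliasing idea (two virtual mappings, one executable and one readable, backed by the same physical memory) can be demonstrated from user space on Linux with a memfd, though the scheme above would of course do the remapping from kernel/SMM against pages you don't own:

        /* Two virtual mappings of the same physical page: an RX "code" view
         * and an RW "data" view.  Linux/x86-64, glibc with memfd_create. */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <string.h>
        #include <stdint.h>
        #include <unistd.h>
        #include <sys/mman.h>

        int main(void) {
            long pagesz = sysconf(_SC_PAGESIZE);
            int fd = memfd_create("alias-demo", 0);
            if (fd < 0 || ftruncate(fd, pagesz) != 0) return 1;

            /* mov eax, 7 ; ret */
            const uint8_t code[] = { 0xB8, 0x07, 0x00, 0x00, 0x00, 0xC3 };

            uint8_t *rw = mmap(NULL, pagesz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            uint8_t *rx = mmap(NULL, pagesz, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
            if (rw == MAP_FAILED || rx == MAP_FAILED) return 1;

            memcpy(rw, code, sizeof code);   /* write through the data view */

            int (*fn)(void) = (int (*)(void))(void *)rx;
            printf("executing via the RX alias: %d\n", fn());
            printf("dumping via the RW alias:   %02x %02x %02x %02x %02x %02x\n",
                   rw[0], rw[1], rw[2], rw[3], rw[4], rw[5]);
            return 0;
        }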

  • It would be a little work, but simply by observing the changes in the register file step by step, you could make some good guesses at what instruction was executed. That gives you a portion of the decrypted executable code. If you can recover a few 16-byte blocks (the AES block size), you have known plaintext to work with against the encryption.

    The other issue is that the only modes they could plausibly use to encrypt the data are ECB, CTR, or XTS, and there are known attacks on those modes when you have leaking cleartext.
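
    As a concrete example of what "leaking cleartext" buys you with a keystream mode like CTR: one known plaintext/ciphertext pair at an offset gives you the keystream at that offset, which is enough to rewrite the code there without ever learning the AES key. A toy sketch, with random bytes standing in for real AES-CTR output:

        /* Toy CTR-malleability demo: knowing the plaintext at some offset lets
         * an attacker recover the keystream there and substitute chosen bytes.
         * No key recovery involved; rand() stands in for AES-CTR(key, ctr). */
        #include <stdio.h>
        #include <stdint.h>
        #include <stdlib.h>

        #define N 16

        int main(void) {
            uint8_t keystream[N], ct[N], forged_ct[N];
            const uint8_t known_pt[N]  = "push rbp; mov..";   /* guessed prologue */
            const uint8_t chosen_pt[N] = "xor eax,eax; ret";  /* attacker's pick */

            srand(1234);
            for (int i = 0; i < N; i++) keystream[i] = (uint8_t)rand();

            /* "Encryption" of the original code block. */
            for (int i = 0; i < N; i++) ct[i] = known_pt[i] ^ keystream[i];

            /* Attacker: recover keystream from known plaintext, then forge. */
            for (int i = 0; i < N; i++)
                forged_ct[i] = chosen_pt[i] ^ (ct[i] ^ known_pt[i]);

            /* Victim decrypts the forged ciphertext with the real keystream. */
            uint8_t decrypted[N + 1] = {0};
            for (int i = 0; i < N; i++) decrypted[i] = forged_ct[i] ^ keystream[i];

            printf("victim now \"executes\": %s\n", (const char *)decrypted);
            return 0;
        }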

  • Comment removed based on user account deletion
