New Encryption Method Fights Reverse Engineering
New submitter Dharkfiber sends an article about the Hardened Anti-Reverse Engineering System (HARES), which is an encryption tool for software that doesn't allow the code to be decrypted until the last possible moment before it's executed. The purpose is to make applications as opaque as possible to malicious hackers trying to find vulnerabilities to exploit. It's likely to find work as an anti-piracy tool as well.
To keep reverse engineering tools in the dark, HARES uses a hardware trick that’s possible with Intel and AMD chips called a Translation Lookaside Buffer (or TLB) Split. That TLB Split segregates the portion of a computer’s memory where a program stores its data from the portion where it stores its own code’s instructions. HARES keeps everything in that “instructions” portion of memory encrypted such that it can only be decrypted with a key that resides in the computer’s processor. (That means even sophisticated tricks like a “cold boot attack,” which literally freezes the data in a computer’s RAM, can’t pull the key out of memory.) When a common reverse engineering tool like IDA Pro reads the computer’s memory to find the program’s instructions, that TLB split redirects the reverse engineering tool to the section of memory that’s filled with encrypted, unreadable commands.
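The mechanism is easier to picture with a toy model. The Python sketch below is purely illustrative and is not HARES's implementation: the page tables are dicts, XOR stands in for the real cipher, and the addresses and key are made up. It only shows the core idea that instruction fetches and data reads of the same virtual page can resolve to different physical frames.

    # Toy model of a split TLB: the same virtual page maps to different
    # physical frames depending on the access type. XOR stands in for the
    # real cipher; this is an illustration, not how HARES is implemented.

    KEY = 0x5A  # pretend per-CPU key (made up)

    # "Physical memory": one frame holds the encrypted code, the other the
    # decrypted copy produced at load time.
    plain_code = bytes([0x90, 0x90, 0xC3])           # pretend instructions
    phys = {
        0: bytes(b ^ KEY for b in plain_code),       # encrypted frame
        1: plain_code,                               # decrypted frame
    }

    # Split TLB: instruction fetches and data reads of virtual page 0x400
    # resolve to different frames.
    itlb = {0x400: 1}   # instruction fetches see the decrypted frame
    dtlb = {0x400: 0}   # data reads see the encrypted frame

    def fetch(vpage):
        """What the CPU executes."""
        return phys[itlb[vpage]]

    def read(vpage):
        """What a debugger or memory dumper sees."""
        return phys[dtlb[vpage]]

    if __name__ == "__main__":
        print("executed bytes:    ", fetch(0x400).hex())  # 9090c3
        print("bytes a tool reads:", read(0x400).hex())   # ciphertext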
Bring it on, folks! (Score:5, Insightful)
The crackers are going to love breaking this in 1, 2, 3 ...
Re:Bring it on, folks! (Score:5, Funny)
Ah, but count-ups are indefinite. Now they won't find it until they count to a million or something. Should have counted down, but now it's too late...
Re:Bring it on, folks! (Score:5, Funny)
So are count-downs. 3, 2, 1, 0, -1...
So are count-tos.
1, 2, 2 and a half, 2 and three quarters...
Re:Bring it on, folks! (Score:5, Interesting)
I built a technological solution similar to this, where the TLB split was done in ring -1 (VMX/SMM). It was ridiculously easy to decrypt and execute on the fly, only as a given page was executed, and really fast. Key exchange happens in ring -1 with an external licensing server. The only way to defeat my mechanism is to get into ring -1 before I did, which of course is possible to do. No DRM system is perfectly secure, but this was orders of magnitude more difficult than your average system. If you attached a debugger to the protected process, you would literally see the encrypted opcodes. You could single-step and execute as normal, but the executable code always appeared encrypted from the user's perspective, because data reads would always return the encrypted code whereas instruction reads would always be decrypted.
The biggest problem I had with this technology was actually the compiler. Some compilers like to mix read-only data into code segments. It wasn't impossible to fix, but it was the biggest headache.
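The compiler headache described above is visible even in a toy version of the same scheme. In the sketch below (again XOR as a stand-in cipher, made-up constants), a read-only string that the compiler placed in the "code" page is fetched through the data path and comes back as garbage:

    # Sketch of the "read-only data mixed into the code segment" problem:
    # data reads go through the encrypted view, so a constant that the
    # compiler placed next to the instructions comes back scrambled.
    KEY = 0x5A

    code_bytes = bytes([0x90, 0x90, 0xC3])
    rodata     = b"LICENSE-STRING"          # compiler put this in .text
    page_plain = code_bytes + rodata
    page_enc   = bytes(b ^ KEY for b in page_plain)

    def data_read(offset, length):
        # Data accesses are routed to the encrypted copy of the page.
        return page_enc[offset:offset + length]

    if __name__ == "__main__":
        # The program tries to read its own embedded constant:
        print(data_read(len(code_bytes), len(rodata)))  # not "LICENSE-STRING"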
Re:Bring it on, folks! (Score:5, Interesting)
Sounds like all you need to analyze this is a "fake" processor.
E.g., running this inside something like Bochs, which has a built-in x86 debugger and runs a lot like a hypervisor. To be safe from this kind of analysis, the encryption would need to detect that it is living inside a fully emulated system and simply refuse to operate. Bochs will let you step through exactly which instructions the emulated CPU is actually executing, regardless of the data that is stored in the memory allocated to the emulator's process.
Don't get me wrong-- this makes a nasty bump in the road for career data thieves, but forensic analysis of the encryption is not completely thwarted.
Re: (Score:2)
Sounds like all you need to analyze this is a "fake" processor -- running it inside something like Bochs, which has a built-in x86 debugger. The encryption would need to detect that it is living inside a fully emulated system and refuse to operate in order to be safe from this kind of analysis.
Not to mention that it is extremely hard for a program to detect that it is inside a VM like Bochs unless the VM exposes something that can be detected -- e.g. a BIOS string, hardware signature, etc. Even then, that's easy for a cracker to fix by modifying the VM to have a different string or hardware signature.
Re: (Score:2)
That's actually the opposite of true. Many techniques (http://static.usenix.org/event/woot09/tech/full_papers/paleari.pdf, http://roberto.greyhats.it/pro... [greyhats.it], http://honeynet.asu.edu/morphe... [asu.edu], http://www.symantec.com/avcent... [symantec.com]) exist to identify the presence of a CPU emulator, because these things aren't (and will likely never be) perfect. Most of those techniques don't even rely on timing attacks. Once you introduce timing attacks (*especially* if there's an external source of time information), all bets are off.
Re: (Score:2)
Many techniques exist to identify the presence of a CPU emulator, because these things aren't (and will likely never be) perfect. Once you introduce timing attacks (*especially* if there's an external source of time information), all bets are off.
You do realize that Bochs does software emulation of each instruction, and that you can control every aspect of the emulated computer, don't you?
If you are running something under Bochs or something like it and don't care about the performance, you can actually make it lie to the software underneath about timing, so that the software still thinks it is running at the normal rate when in reality it isn't -- Bochs, after all, implements the base system clock without relying on an external source. This is also why Boc
Re: (Score:2)
Just detect whether CPU performance is above a certain threshold. Bochs is dead slow, as is anything else that emulates thoroughly enough to make this attack vector relevant; even emulated x86 CPUs with FPUs are at 486 performance levels these days.
Until you adjust the base clock so that the software running under Bochs thinks it is running at 1 GHz while in reality it may be executing one instruction per minute. Yes, it's possible to do, since Bochs doesn't rely on the host system for any hardware -- unless you introduce the QEMU module for Linux to bring it to near-native speed, but someone employing this kind of reverse engineering wouldn't want to do that.
This is possible because Bochs is a 100% software-emulated computer. They emulate everything, and ev
Re: (Score:2)
Yep, that trick totally evaded me... I don't doubt it would work fine, though. Now, what about if it had to be connected to the internet to validate the installation at startup? >:) And the server had to give its response in a reasonable amount of time, i.e. 100 ms, and you couldn't fake it on the PC due to encryption. Now, I don't doubt that could be broken, but it would be a tad harder, at least perhaps... maybe :D
Until you have someone with a slow internet connection, e.g. sat-com, where latencies are typically around 500 ms or worse. A poor network, or a bad mix of equipment, can make latencies really bad even on an otherwise good network.
For example, my Dell D600 from 2003 had a Broadcom networking chipset in it. The 1 Gbit interface had a problem with some Cisco routers. The previous routers it was on were 10/100 and it had no issue, but when they upgraded the router to 10/100/1000, the new router had an issue wi
Re: (Score:2)
A friend of mine suggested such an idea to me 15 years ago. This is what I told him.
You run it inside an x86 interpreter which simulates the CPU all the way through the decryption. Then you snapshot the decrypted code, rebuild the app without the decryption, and substitute the decrypted code. Now it's just a plain app. If it tries to "detect its environment" (read the clock, etc.), the simulator just tells it whatever it wants to hear. It cannot know it's being simulated; it's just code.
In a sense, that is exactly what Bochs is. It's a true virtual machine in that it interprets every CPU instruction in software and emulates every piece of connected hardware -- RAM, motherboard, video, network, USB, chipsets, etc. -- to do so. A truly fine tool for OS and hardware developers.
Re: Bring it on, folks! (Score:5, Interesting)
Well, that depends a bit. It would be moderately difficult with VMware, since there aren't good facilities to get at what you are looking for. It would be easier on Bochs, which has integrated VM debugging, although with good obfuscation it is still difficult. But as I mentioned above in my post, this kind of system isn't impossible to break, just more difficult.
Back before DVD DRM was generally broken with DeCSS, I had my own mechanism for breaking DVDs. It was cumbersome, but it worked. I would use a software DVD player to unlock the DVD drive. I had hardware acceleration for the DVD decoding itself, which meant that 90% of the CPU time in use was spent doing decryption. So I would attach a debugger and randomly break in, and statistically speaking it wouldn't take long to catch the decryption function in action. From there it was trivial to find the key, and once I had the key I could decrypt the DVD externally.
You could do a similar thing for this kind of problem. If you are in ring -1, statistically speaking there is a good chance you are running the decryption algorithm. Once you find that, it isn't too hard to trace back to the decrypt init function and extract the key.
This is part of the reason I gave up on (making) DRM stuff. No matter how hard you make it, it's always easy to crack with the right know-how and tools. ANY DRM can be cracked. Wanna play with my Netflix downloader?
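The "randomly break in and you'll land in the decryption routine" trick is essentially statistical profiling. Below is a rough, self-contained Python illustration of the idea only; the function names are invented and the "decryption" is just busy work that dominates CPU time:

    # Rough illustration of the "break in at random" trick: if decryption
    # dominates CPU time, random samples of the running thread mostly land
    # in the decryption routine. Function names here are invented.
    import collections
    import sys
    import threading
    import time

    def decrypt_block():              # stand-in for the hot CSS/AES routine
        x = 0
        for i in range(200000):
            x ^= i
        return x

    def housekeeping():
        time.sleep(0.001)

    def worker(stop):
        while not stop.is_set():
            for _ in range(9):        # ~90% of the time spent "decrypting"
                decrypt_block()
            housekeeping()

    if __name__ == "__main__":
        stop = threading.Event()
        t = threading.Thread(target=worker, args=(stop,), daemon=True)
        t.start()

        hits = collections.Counter()
        for _ in range(200):          # 200 random "debugger break-ins"
            time.sleep(0.005)
            frame = sys._current_frames()[t.ident]
            hits[frame.f_code.co_name] += 1
        stop.set()

        print(hits.most_common())     # decrypt_block should dominate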
Re: Bring it on, folks! (Score:5, Interesting)
Back before DVD DRM was generally broken with DeCSS, I had my own mechanism for breaking DVDs. It was cumbersome, but it worked.
Me too. I electrically emulated an LVDS flat panel and reconstructed the high-resolution image from the LVDS signal.
Works great for Blu-ray encryption, and for projectionist monitor screens in movie theaters too, since the flat panels themselves are *after* the content decryption.
Re: (Score:2)
I'm jelly; I always wanted to try this. High-end FPGA?
There are commercial emulation chips available, used for testing everything up to the flat panel. You could build your own with an FPGA if you wanted to, I suppose.
Re: (Score:2)
That was my first thought too on reading this story. They've made it hard to see the program instructions, so just go back a step and reverse engineer the decryption algorithm.
Re: (Score:2)
But as I mentioned above in my post, this kind of system isn't impossible to break, just more difficult.
Yes, indeed. Soon after something like this is released, new tools are built to break it.
While I don't doubt the HARES system adds another level of difficulty, anything that can be executed can be decrypted. The very worst that could happen is that it would have to be run through an ICE or software emulator that records the data and instructions as they are executed.
Re: (Score:2)
The crackers are going to love breaking this in 1, 2, 3 ...
Better crackers are going to love breaking this in 3, 2, 1 ... :-)
Re:Bring it on, folks! (Score:4, Interesting)
I am the author of Loop-Amnesia, a system similar to TRESOR, but more sophisticated in that it supports multiple encrypted volumes. After looking over the article, it does not appear that this is at all similar. It also does not appear to protect against the cold boot attack as claimed.
The authors claim a 2% performance reduction. Such a small reduction implies that the instructions are not being decrypted literally on the fly; the hit would be much more severe if they were. They're using a tactic called a "TLB split," which desynchronizes the cached address translations so that reading memory gets you different results from executing it. A page of executable code is likely decrypted with a key stored in the CPU and placed in a different physical page, and then the TLB split is performed so that instruction fetches go to the decrypted page while data reads still go to the encrypted page.
The cold boot attack dumps physical memory. This tactic corrupts virtual address translation to frustrate analysis. The executable code is still stored in RAM somewhere, just not somewhere you can get to by reading from a virtual memory address. The cold boot attack would still work fine.
Finally, TRESOR and Loop-Amnesia are not broken. TRESOR-HUNT only works if you enable DMA on your FireWire bus. You shouldn't be doing that anyway.
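The cold-boot point is easy to demonstrate in a toy model of the split: the decrypted copy has to live in some physical frame, and a physical memory image contains it regardless of what the data-read translation shows. Sketch only, with XOR standing in for the cipher and made-up addresses:

    # The TLB split hides the plaintext from *virtual* data reads, but a
    # cold boot attack dumps *physical* memory, where the decrypted copy
    # still lives. Toy model; XOR stands in for the real cipher.
    KEY = 0x5A
    plain_code = bytes([0x90, 0x90, 0xC3])

    phys_mem = {
        0x1000: bytes(b ^ KEY for b in plain_code),  # encrypted page
        0x2000: plain_code,                          # decrypted page
    }
    dtlb = {0x400000: 0x1000}   # virtual data reads see only ciphertext

    def virtual_read(vaddr):
        return phys_mem[dtlb[vaddr]]

    def cold_boot_dump():
        # Physical memory image, no address translation involved.
        return b"".join(phys_mem[a] for a in sorted(phys_mem))

    if __name__ == "__main__":
        print("debugger sees: ", virtual_read(0x400000).hex())
        print("cold boot dump:", cold_boot_dump().hex())  # contains 9090c3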
Re: (Score:3)
I assume you mean PCI Express, since PCI-X is an obsolete standard not used on modern systems, but the answer is the same for PCI, PCI-X, and PCI Express, so no matter.
The TRESOR-HUNT attack works by having the attacker plug a malicious peripheral into the running computer, then having that peripheral use DMA to write malicious code into the computer's RAM which copies the encryption key out of the CPU.
Plugging a PCI card into a computer while it is running is likely to fry the motherboard, or at the very l
Re: (Score:3)
For all practical purposes, you are incorrect. Desktops and laptops do not typically support PCI Express hot swapping; this is a feature implemented only on high-end server chipsets.
Additionally, grow up.
Re: (Score:2)
Heh ... you're lucky. I seated a PCI card wrong once and it shorted out. Fortunately, it was only $10 or so to replace.
But you may have a point: it might be possible to electrically tap the PCI or PCI Express bus and do bad things with DMA, even if the bus wasn't built to support hot-swapping. You'd probably need custom hardware, a lot of time, and a lot of luck, though. Also, you'd need to keep power to the CPU on, meaning something like a chassis intrusion detector would be a sufficient countermeasure.
Re: (Score:2)
The crackers are going to love breaking this in 1, 2, 3 ...
Odds are the antivirus companies will beat them to it. Else how will they protect against encrypted viruses? Gotta at least maintain the pretense of protection, right?
Already solved (Score:5, Funny)
I keep my code undeadable with a liberal use of goto [slashdot.org] statements.
Re:Already solved (Score:5, Funny)
I keep my code undeadable with a liberal use of goto statements.
You made an infinite loop with goto?
Re: (Score:2)
I keep my code undeadable with a liberal use of goto statements.
You made an infinite loop with goto?
Hasn't everyone at some point early in their career?
Re: (Score:2)
LOL, that was me for 5 years on QBasic until I looked at the help file. And then a new world of magical wonder was opened to me...
Re:Already solved (Score:4, Funny)
That's a terrible joke. Goto your room!
Re:Already solved (Score:5, Funny)
Hardware by Intel, code by Escher.
Re: (Score:2)
(yes, I know who M.C. Escher is... I just prefer to imagine him as a rapper)
You and Seth MacFarlane [youtube.com].
Re:Already solved (Score:4, Funny)
I keep my code undeadable with a liberal use of goto [slashdot.org] statements.
I store my undeadable code using a Walking Dead technique, whereby the binary code is reaped from the return statuses of zombie processes killed at the last possible moment ...
Re: (Score:2)
I keep my code undeadable with a liberal use of goto [slashdot.org] statements.
Or simply use a language made out of gotos - Forth. [wikipedia.org]
It'll never be "decrypted."
Re: (Score:2)
Forth is a 'write only' [wikipedia.org] language.
Topic Shift (Score:2)
Still waiting to see some s/w dev in my neighborhood buy a Tesla and get the plate HCF [wikipedia.org]
The keys are in the CPU (Score:1)
I assume by "inside the processor" they mean in the L2 or L3 cache. What is to stop someone from extracting the cached keys and decrypting the entire program? I assume they have some mechanism, but does anyone know what sort of mechanism that would be?
Re: (Score:2)
Seems silly as my CPU runs inside a grounded metal box. [wikipedia.org]
That only helps you until the 30 petahertz processors start shipping and they begin emitting X-rays [wikipedia.org]. Then you'll need a lead vault.
More of the same: (Score:5, Interesting)
Just another step along the road of "We own your computer, not you."
Re: (Score:2)
I admit I haven't looked into it deeply yet, but I suspect it may be able to switch in and out of this mode. Otherwise, you'd have to precompile everything you run in encrypted form and not be able to use any shared libraries. The binaries would be pretty tubby and performance would suck for the reasons you give.
Run the license checks and some of the key code that's not very compute intensive in the encrypted space, and then shift context to run things you call to do the heavy work in unencrypted space.
Re: (Score:3)
If it works, its tradeoff of speed for security seems to be worthwhile. That being said, I doubt it works. Time will tell.
Given that a number of people here have already mentioned how this could be cracked, I'm not sure you'll get much, if any, security. After all, if you're looking for vulnerabilities, all it takes is for those few hackers who care to have a VM that can analyze the cache. Meanwhile, the programs will be running slower for everyone. And Moore's Law is pretty much dead, so you can't just assume that faster and faster processors will make the slowdowns disappear.
Re: (Score:2)
What scares me more is virus and malware creators getting hold of this technology. If it does what is being claimed, imagine having to write a defense against malware encrypted this way.
You load 16 tons: (Score:2)
"St. Peter don't you call me, 'cause I can't go. I owe my soul to the company store."
They always did own you. ;)
Really? (Score:5, Insightful)
The only time these kinds of tools seem to 'work' is when you are producing something which lacks the popularity to be worth the effort, which is not a good sign.
They work, and fill a need (Score:1)
Vertical markets have severe problems with unauthorized software use (i.e., piracy). This will make cracking that software much more difficult.
Re: (Score:2)
These 'tools' usually just harden a stage that a pirate is probably not going to be needing to
Re:Really? (Score:5, Informative)
We have had many, many technologies that were supposed to stop reverse engineering.
I remember back in the Apple ][ days, a program called "Lock it Up" by Double Gold Software had anti-reverse-engineering tricks in it and was advertised as sending the bad guys packing (one of which was doing "POKE 214,128", which would disable the BASIC prompt). Then we had obfuscators for C++, BASIC, Java, and other languages -- same thing.
This technology looks like it will be broken by running it in a VM, so I'm sure the next generation will have anti-VM stuff in it, and someone will just run a Bochs emulator (dog slow, but emulates everything 100%) to bypass that.
My take: How about companies spend money on improving their software instead of playing with DRM that will get broken anyway? In the enterprise, the fear of an audit is good enough to keep people in compliance with Oracle licenses. For games, CD keys are good enough: players can play locally, but can't go multiplayer without a proper key.
If the code is so sensitive it -has- to be protected, put it in a tamper-resistant appliance, like an HSM.
Re: (Score:2)
If the key is burned when the CPU is made, it would be an industry-wide key.
But if the key is burned when the CPU is tested, it can be a CPU-unique key. Is there any evidence they would mask the key, when all modern processors have microcode anyway?
Re: (Score:2)
To have a unique key on every chip, you'd need asymmetric crypto -- something perhaps like Secure Boot or TLS that works with certificate chains, allowing cryptographic verification via unique per-chip certificates that contain keys.
That does not exist on die. They'd have to load that into cache
Re: (Score:2)
With hardware support in the CPU this can be done properly.
CPU-unique public/private key pair generated by the manufacturer. Public key signed by the manufacturer's private key. To install a program, the CPU's public key is validated, the program is encrypted with a unique key, the unique key is encrypted with the CPU's public key, and the program plus encrypted key are sent to the customer.
The CPU would then be given the execution key, which it decrypts internally with its private key and saves securely (no access via JTAG, no instructions to access it
Re: (Score:2)
I remember a computer that operated in that fashion: the Texas Instruments TI-99/4A. It had this thing called a "GROM" that decrypted instructions before execution. It didn't make a lot of friends and very rapidly dove into obscurity.
This more recent attempt makes me wonder about 2 things.
First, encryption, like compression, is usually something that's applied to linear sequences. If you literally encrypted the code, then it seems like at a minimum the TLB pages would have to be decrypted as units, since
Well, that's it...better pack up hackers... (Score:5, Funny)
That's it. They've finally come up with uncrackable software. I guess all the hackers will just have to pack their bags and find another hobby now. It was a good many decades while it lasted. But now it's clearly over. Congrats to Jacob Torrey on doing what no one else has ever been able to do! No way this will ever be cracked. He's beaten us all.
Re: (Score:2)
They would be crazy to! Clearly, there is no way to beat it. No one will ever be up to the task of beating this new system.
In 3...2...1... (Score:5, Insightful)
... somebody exploits this to write malware that's truly a bitch to reverse-engineer.
Re:In 3...2...1... (Score:5, Insightful)
and viruses/trojans that are immune to signature-based scanning. Better get serious about process privilege and running stuff in revertible VMs.
Re: (Score:3)
Nothing new: proper polymorphic computer viruses have existed for a long time (>20 years), and partially polymorphic viruses for longer...
http://en.wikipedia.org/wiki/P... [wikipedia.org]
Re: (Score:2)
Good point; I hadn't thought of those. So this will really be an extra layer of obfuscation (though perhaps harder to get around; I'm not sure what approaches exist for analyzing polymorphic viruses, but this is likely to block them).
No JTAG access? (Score:5, Interesting)
Re: (Score:2)
You are wrong. RTFA... they state directly that JTAG debuggers would work on this. The idea is that JTAG debuggers are expensive. Some are.
It's a dumb idea, because some aren't. Some have asserted that you would need one of the expensive ones so that you could do it in real time, but a) I doubt you could do it in real time, and b) there's no reason why stepping through the code wouldn't be enough.
Re: (Score:2)
You are wrong. RTFA... they state directly that JTAG debuggers would work on this. The idea is that JTAG debuggers are expensive. Some are
Which is a terrible defence when you're talking about a single key used for the entire industry (it has to be burned into the CPU, so good luck updating it.)
Re: (Score:2)
I would countersuggest 1999.
Sigh. (Score:5, Insightful)
"Another way to crack HARES' encryption, says Torrey, would be to take advantage of a debugging feature in some chips... But taking advantage of that feature requires a five-figure-priced JTAG debugger, not a device most reverse engineers tend to have lying around."
Or running the code in a VM.
Really? This sounds just the same as someone saying that DEP would stop this kind of reverse engineering (the concept seems incredibly similar to me; maybe I'm wrong). If someone wants to reverse engineer software, they will have the tools to do so, and in this modern world, any protection that relies on running on physical hardware rather than in a VM must have a limited lifespan.
If all else fails, emulate the machine. Slow, yes, but reverse-engineering and debugging tools are incredibly slow anyway.
Sorry, but this is a slashvertisement for something with precisely zero deployments in real-life software that people might want to reverse-engineer.
And, as others have said, all you've done is make it easier to create malware that's difficult to remove. So, in effect, the relevant facilities in processors will end up being beefed up to take account of this, rendering the technique obsolete.
In all of recorded computing history, every technique for preventing reverse-engineering or debugging has turned out not to work, or to be so onerous on users that nobody ever actually enables it.
Intel Already Does This (Score:4, Informative)
Recent Intel processors have the ability to use encrypted RAM and only decrypt it in the CPU's caches. They do it with the SGX instructions. [intel.com]
Re: (Score:2, Insightful)
Not released until Skylake. Can't believe this is +4 Informative; should be -1, No Research.
https://software.intel.com/en-us/intel-isa-extensions
Does it matter? (Score:5, Insightful)
As long as you can hide from the software the fact that you are debugging it, you can step through it until it is decrypted. So for all the money and all the added complexity, all you've won is a little more time. The only real copy protection is when part of the code is not run locally but on a remote machine -- for example, if you have something on a server which needs to be queried to allow you to continue with the software, like some of the online authorization schemes.
Re: (Score:3)
Weren't there online MMO games that tried that? And someone just made a tool that cached all the content the local computer received and offered it from a fake local server instead. Not perfect, but surely good enough to defeat even these tactics if you have enough interest in reverse-engineering something.
If the code you want to protect is running on general-purpose processors under the control of a third party (the user who might want to reverse-engineer), there's nothing you can do to stop them.
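A record-and-replay proxy of the kind described is conceptually tiny: record the real server's answers keyed by request, then serve them back from the cache. A minimal sketch; query_real_server() and the cache file name are made-up placeholders, not any real game's protocol:

    # Minimal record/replay sketch of the "cache the server's answers and
    # serve them from a fake local server" approach. query_real_server()
    # is a made-up stand-in for whatever the game actually talks to.
    import json
    import os

    CACHE_FILE = "responses.json"   # hypothetical cache location

    def query_real_server(request: str) -> str:
        raise RuntimeError("real server not available in this sketch")

    def load_cache():
        if os.path.exists(CACHE_FILE):
            with open(CACHE_FILE) as f:
                return json.load(f)
        return {}

    def save_cache(cache):
        with open(CACHE_FILE, "w") as f:
            json.dump(cache, f)

    def handle(request: str, record: bool) -> str:
        cache = load_cache()
        if request in cache:
            return cache[request]                # replay a recorded answer
        if not record:
            return "ERROR: unknown request"
        response = query_real_server(request)    # record phase only
        cache[request] = response
        save_cache(cache)
        return response

    if __name__ == "__main__":
        # Replay phase: everything the client was seen to ask comes from disk.
        print(handle("GET /login?user=alice", record=False))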
Re: (Score:2)
I don't know which country you're talking about, but in the USA, "first sale" is a defense to copyright infringement, and defenses to copyright infringement aren't defenses to circumvention.
Re: (Score:2)
As long as you can hide from the software the fact that you are debugging it, you can step through it until it is decrypted.
Yep. In fact, you could build a virtual machine that would automate that for you, and collect the decrypted instructions as it runs.
So, as always, "technique to prevent reverse engineering" == "snake oil"...
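In miniature, such a collecting virtual machine is just an instrumented fetch-decode-execute loop that logs each instruction right after decrypting it. A toy sketch with an invented two-opcode bytecode and XOR as the "encryption":

    # Toy instrumented VM: each instruction is decrypted immediately before
    # execution and logged, so the full plaintext program falls out of the
    # trace. XOR and the two-opcode ISA are obviously stand-ins.
    KEY = 0x42

    # Tiny program: PUSH 2, PUSH 3, ADD  (opcodes 0x01 imm, 0x02)
    plain_program = bytes([0x01, 2, 0x01, 3, 0x02])
    encrypted_program = bytes(b ^ KEY for b in plain_program)

    def run(enc):
        trace, stack, pc = [], [], 0
        while pc < len(enc):
            op = enc[pc] ^ KEY                  # decrypt just before execution
            if op == 0x01:                      # PUSH imm8
                imm = enc[pc + 1] ^ KEY
                trace.append(("PUSH", imm))
                stack.append(imm)
                pc += 2
            elif op == 0x02:                    # ADD
                trace.append(("ADD",))
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
                pc += 1
            else:
                raise ValueError(f"bad opcode {op:#x}")
        return trace, stack

    if __name__ == "__main__":
        trace, stack = run(encrypted_program)
        print("recovered trace:", trace)   # the decrypted program
        print("result:", stack)            # [5]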
Re: (Score:2)
You need irreplaceable functions to be running on a server -- functions that go to the core of the enjoyment of the game and which cannot be easily reverse engineered to make a simulated version of the server.
Unfortunately, the more you put on the server, the higher your ongoing operating cost is going to be, and the more outraged your players will be when you shut down the servers and the game stops working, so such a technique will only work for one o
Run under an emulator (Score:2)
Is it emulator-safe? Would there be any way to determine which instructions are part of the decryption engine and which are part of the application?
Re: (Score:2)
open-source stuff is not obfuscated, and it is now sufficient for all purposes
Including games? And playback of lawful copies of films published by major movie studios?
What could possibly go wrong? (Score:2)
literally (Score:2)
> That means even sophisticated tricks like a “cold boot attack,” which literally freezes the data in a computer’s RAM
Does the attacker douse the computer in liquid nitrogen, like the T-1000?
Hardware support is required (Score:2)
To do the job properly would take a processor with encryption support baked in, like Sega's Kabuki, or in the memory controller as in the Xbox. Software encryption or obfuscation is nearly useless.
Re: (Score:2)
And of course it either has to refuse to run on a processor without baked-in encryption, or it's vulnerable to emulation. (Heck, if you can emulate an encrypting processor, it's still vulnerable to emulation...)
Fuzzing (Score:3)
Re: (Score:2)
Mu
Wired vs Slashdot? (Score:2)
For some fun, read the comments on TFA, and compare them to the comments here.
Demographic estimations, anyone?
Classic DRM flaw ... (Score:3)
As this, by definition, requires that the encryption key be present in the clear on the machine where the decryption is happening in order to decrypt the instructions (the CPU cannot execute encrypted code), it can be trivially circumvented. Finding where the key is stashed is only a matter of time, and then the encrypted code can be conveniently decrypted offline, repackaged without the stupid performance-impeding encryption (caching will suffer badly with it), and released on a torrent somewhere, as always ...
Fundamentally this is not different from doing ROT13 on your code - code obfuscation.
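And once the key is out, the repackaging step really is mechanical: decrypt the protected region offline and write a plain binary back out. A sketch with made-up file names and XOR standing in for the real cipher:

    # Once the key is recovered, repackaging is mechanical: decrypt the
    # protected section offline and write a plain binary back out.
    # XOR and the file names are stand-ins, not the real scheme.
    def repackage(encrypted_path, output_path, key, start, length):
        with open(encrypted_path, "rb") as f:
            image = bytearray(f.read())
        for i in range(start, start + length):
            image[i] ^= key                 # strip the "encryption"
        with open(output_path, "wb") as f:
            f.write(image)

    if __name__ == "__main__":
        # Build a fake "protected" binary so the example is self-contained.
        plain = b"HEADERsecret code section"
        protected = plain[:6] + bytes(b ^ 0x5A for b in plain[6:])
        with open("app.protected", "wb") as f:
            f.write(protected)

        repackage("app.protected", "app.plain", key=0x5A, start=6,
                  length=len(plain) - 6)
        with open("app.plain", "rb") as f:
            print(f.read())                 # original bytes restored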
Reverse engineer in a VM (Score:2)
AV products will slaughter this... (Score:2)
AV products will have to kill this dead, because otherwise they won't be able to easily detect malware. If it can't be inspected, it can't be known to be safe, so I'm going to bet anything using this that isn't whitelisted, e.g. by digital signature, is going to be DOA.
QEMU processor emulation? (Score:2)
Can't you just emulate the processor with QEMU and run the app in a sandboxed environment ?
https://github.com/hackndev/qe... [github.com]
------
https://stackoverflow.com/ques... [stackoverflow.com]
What's new? (Score:3)
I honestly don't see how this is anything innovative; it's a known artifact of the x86 microarchitecture (it isn't an architectural thing, though, and it will not work on all x86 processors*). That it could be used for a copy-protection scheme is also obvious to anyone with that level of knowledge.
This, together with things like disabling primed data caches (x86 processors will still allow accesses to caches even when disabled, under some circumstances), is a relatively fragile trick. And it really doesn't buy much extra security given the existence of a good low-level emulator.
(* there are x86 processors with a shared I/D TLB, not commonly in use nowadays though, exercise for the reader ;P)
Hahahaha! (Score:2)
Seriously. Are there really people out there so naive that they think this will pose anything more than a minor inconvenience?
Confused? (Score:2)
What they really want is white-box cryptography, but it seems computationally impractical right now.
Also, http://en.wikipedia.org/wiki/T... [wikipedia.org] did this and was broken!
There is a much simpler way to do this. (Score:5, Funny)
Venkat!!! Why in God's good name are you passing a reference to a pointer to a function as a constructor argument?!?!?! Aarggghhhh!
Duplicate the TLB code entries! (Score:3)
To me it looks like this trick has a similar, very simple trick to defeat it:
Assuming you can run some code in kernel (or even SMM) mode, you should be able to scan through all code segments that are marked execute-only and which have a data segment that aliases them -- i.e. same virtual address, different physical addresses.
When you find such blocks, you just create new read-only or read-write mappings which point to the same physical addresses as the decrypted/execute-only memory.
At that point you can dump/debug to your heart's content.
Terje
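A toy version of that aliasing trick, with page tables as dicts and XOR as the stand-in cipher: find the physical frame behind the execute-only mapping, add a readable mapping onto the same frame, and dump through it. Real code would of course manipulate actual page tables from the kernel or SMM.

    # Toy version of the alias-mapping trick: add a readable mapping that
    # points at the same physical frame as the execute-only (decrypted)
    # mapping, then dump through it. Page tables here are just dicts.
    KEY = 0x5A
    plain_code = bytes([0x90, 0x90, 0xC3])

    phys = {
        0x1000: bytes(b ^ KEY for b in plain_code),  # encrypted frame
        0x2000: plain_code,                          # decrypted frame
    }
    itlb = {0x400000: 0x2000}   # execute-only view -> decrypted frame
    dtlb = {0x400000: 0x1000}   # data view -> encrypted frame

    def dump_via_alias(vaddr):
        # Step 1: find the physical frame behind the instruction mapping.
        frame = itlb[vaddr]
        # Step 2: create a new *readable* mapping onto that same frame.
        alias = 0x500000
        dtlb[alias] = frame
        # Step 3: read through the alias.
        return phys[dtlb[alias]]

    if __name__ == "__main__":
        print("normal data read:", phys[dtlb[0x400000]].hex())     # ciphertext
        print("alias dump:      ", dump_via_alias(0x400000).hex()) # plaintext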
Defeat with a common debugger (Score:2)
It would be a little work, but by simply observing the changes in the register file step by step, you could make some good guesses at what instruction was executed. That gives you a portion of the decrypted executable code. If you can recover a few 16-byte blocks (the AES block size), you have known plaintext to attack the scheme with.
The other issue is that the only modes they could plausibly use to encrypt the data are ECB, CTR, or XTS. There are many known attacks on those modes when you have leaking cleartext.
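The CTR case is the starkest: any recovered plaintext reveals keystream bytes, and if two regions are enciphered at the same keystream positions (say, per-page counters that restart at zero under one key -- an assumed mistake, not something the article describes), one known region decrypts the other. Sketch below, with random bytes standing in for AES-CTR output:

    # Known plaintext under a CTR-style mode: plaintext XOR ciphertext gives
    # the keystream, which then decrypts any other data enciphered at the
    # same keystream positions. Random bytes stand in for AES-CTR output.
    import os

    keystream = os.urandom(32)                     # unknown to the attacker

    def ctr_encrypt(data, offset=0):
        return bytes(d ^ keystream[offset + i] for i, d in enumerate(data))

    if __name__ == "__main__":
        # Two regions encrypted with the same keystream positions.
        code_a = b"mov eax, 1   ; ret "            # recovered via the debugger
        code_b = b"secret check here  "            # still unknown
        ct_a, ct_b = ctr_encrypt(code_a), ctr_encrypt(code_b)

        # Attacker knows code_a (watched it execute) and sees both ciphertexts.
        recovered_ks = bytes(p ^ c for p, c in zip(code_a, ct_a))
        revealed_b = bytes(c ^ k for c, k in zip(ct_b, recovered_ks))
        print(revealed_b)                          # b"secret check here  "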
Re: (Score:3)
JTAG debuggers are a major problem when you really need to protect your IP. It's enough of a hole that I got NetLogic to add an e-fuse to their XLP network processors (and later generations) that could disable EJTAG.
Blow the e-fuse during ICT on production hardware and you can cut down on RE capabilities a fair bit.
Doesn't really help for general-purpose computers, but it's very nice for hardening embedded systems.
Re: (Score:2)
TFA mentions that JTAG can work, but assumes the tools are too pricey for pirates. (Which of course both overestimates the price of the tools and underestimates the resources of a dedicated pirate who expects to be able to actually sell the fruits of his cracking for money...)
Trusted Platform Module (Score:2)
Run it in an emulated CPU
That won't work if it gets the decryption keys from the Trusted Platform Module (TPM).
Prevent reverse engineering the SaaSS way (Score:2)
There is no way to make an anti-RE tool that is uncrackable on current computer architectures.
Write the software in a managed language to prevent buffer overflows. Instead of distributing it to the public, run it only on a server that you control. This "service as a software substitute" principle is suitable for anything that doesn't need very low latency or very high throughput in a mobile use case.