VM-Based Rootkits Proved Easily Detectable

paleshadows writes "A year and a half has passed since SubVirt, the first VMM (virtual machine monitor) based rootkit, was introduced (PDF), covered in the tech press, and discussed here. Later Joanna Rutkowska made news by claiming she had a VMM-based attack on Vista that was undetectable — a claim that was roundly challenged. Now in this year's HotOS workshop, researchers from Stanford, CMU, VMware, and XenSource have published a paper titled Compatibility Is Not Transparency: VMM Detection Myths and Realities (PDF) showing that VMM-based rootkits are actually easily detectable."

  • I may be mistaken, but I thought Blue Pill was similar to a VM, but was actually a hypervisor exploit. It sounds to me like having dedicated rootkit support built into the chip via the hypervisor would be different from running an OS image inside a software-based virtual machine.
    • Re: (Score:3, Informative)

      Virtual Machine Monitor and Hypervisor are synonyms. Hypervisor, however, generally implies a monitor running on the bare hardware (a type 1 virtual machine), whereas VMM may also refer to a monitor running as a userspace process on a host kernel (a type 2 virtual machine). Thus it is correct to call Blue Pill either a hypervisor or a VMM.

      Generally the term VMM is much more common among implementors of these systems; however, hypervisor is easier to say and sounds cooler, so it's common with users.

      • by kscguru ( 551278 ) on Tuesday October 02, 2007 @11:36AM (#20824771)
        Virtual Machine Monitor and Hypervisor are NOT synonymous - they usually come in the same package, but this is not required.

        An example Virtual Machine Monitor without a Hypervisor is VMware Workstation: a small VMM is loaded to run the guest OS, but it is not complete enough to run the system - it has no task switcher, no memory manager, etc. The host OS acts as the hypervisor here - it is the source of highly-privileged operations unavailable to the guest. Another no-hypervisor VMM is KVM: KVM just runs a virtual machine, but depends on the rest of Linux to run more-privileged operations (and Linux itself becomes the hypervisor).

        An example Hypervisor without a Virtual Machine Monitor is the partitioning software on high-end IBM, Sun, etc. machines, which allows you to physically partition the processors of the system into several actual machines - partitioned machines with zero run-time interdependencies. Literally, a "hypervisor" is something which runs at a privilege level higher than the "supervisor" (the OS).

        Hypervisors and virtual machine monitors have existed since the 1960s. Nobody confused the terms then. IBM started the confusion with a whitepaper [ibm.com] "inventing" the type 1 / type 2 taxonomy to distinguish between 1960s-modern IBM mainframe architectures (low-end = hypervisor only, high-end = combination hypervisor/VMM) and the VMware Workstation architecture (host OS loads the VMM; host OS acts as hypervisor). Note that VMware never claimed Workstation was a hypervisor! Certain communities (Wikipedia, the press) have accepted IBM's whitepaper as gospel truth, thus the proliferation of "type 1" and "type 2" terms over the past several years. (The same community has chosen to ignore academic research from the 1960s and 1996-2005 which used VMM and Hypervisor correctly.)

        With apologies to many individuals who are legitimately using correct terminology, some poorly-informed folks are propagating the "type 2 hypervisor" meme to attempt to equate the abilities of a hypervisor/VMM with a VMM. This is not correct: a combination hypervisor/VMM can ALWAYS achieve better performance than separating the hypervisor and VMM - at the cost of creating a more complex hypervisor (ESX requires custom drivers; Xen requires a customized dom0). The fault for this confusion really rests with Intel: their VT extensions (and AMD's SVM response) have made it so easy to create a VMM that some folks are creating a VMM, then marketing it as a hypervisor in a misguided attempt to compete with existing hypervisors (ESX, Xen) instead of competing with other VMMs (VMware Workstation/Fusion, KVM, Parallels Desktop).

        To understand what a VMM is, read this ACM article [acmqueue.com] by Mendel Rosenblum. Academic research generally looks at VMMs (ways to run a virtual machine), not hypervisors (ways to run something with fewer privileges than the hypervisor). A rough gauge of the quality of academic work is whether they say Hypervisor when they mean Virtual Machine Monitor. Anyone who thinks the two are the same is ignorant of the past ten years of academic research - and anyone ignorant of ten years of research is doing very poor-quality work. (Alas, Wikipedia chose to use the IBM whitepaper for defining terms instead of many years of published, peer-reviewed papers. Great "neutrality", folks!)

    • But why is this article tagged Sony? The Sony rootkit was done by a content division of the company like, what, two years ago? And it wasn't even a VM rootkit.

      Or are the sheeple waving their "Never forget" banners loudly?
      • Re: (Score:3, Insightful)

        by Xiph ( 723935 )
        Actually, when people are aware of how they're being mistreated, and protest it loudly (enough for others to notice), I don't think they qualify as sheeple.
        Well, maybe except for those who still buy Sony music.

        I stopped buying music-cds altogether when one of them installed crap on my winbox.
        • Re: (Score:2, Interesting)

          by Sloppy ( 14984 )
          [not be an asshole (but I can't help it)...]

          I stopped buying music-cds altogether when one of them installed crap on my winbox.

          How did they install crap on your winbox (are you running an ssh server)? I suspect that you installed that crap, or that your OS' virus-support feature installed it for you as a "convenience." Software, no matter how bad, sitting on a CD doesn't just execute itself. Something or somebody (and it wasn't Sony, because they had not yet compromised your machine) decided, "Let's loa

          • by Xiph ( 723935 )
            They exploited the Windows behavior named Autorun.
            They used it to install crap on my computer, when all I knew was word games.

            They placed it there with the intention of this happening.
            That is why I claimed they installed crap on my computer.

            Avoiding the music label, not sure it was Sony, is my way of saying thank you and fuck you too.
            And yes, I will be damned sure to accuse them of doing it, because to the average luser, that's what happens, completely without interaction or information and with every bit o
  • Until there is openness from the processor, BIOS, user software, and everything else through and through, who knows.
    • Re: (Score:3, Insightful)

      by jimicus ( 737525 )
      Where exactly are you going to buy a complete system with a fully documented processor, BIOS (or equivalent firmware) and all component parts right the way down to the Verilog (or [insert chip design software here]) source files?

      Bearing in mind that even then you need to prove that the chip you hold is the same one described by the source files, and the only way you can guarantee that is if you control the chip fab which produces the chip. Failing that, I suppose you could skim the top off one and examine
      • by evanbd ( 210358 ) on Tuesday October 02, 2007 @05:04AM (#20820991)

        Of course, this basic problem [bell-labs.com] was described quite eloquently by Ken Thompson. He went after the compiler, but the problem of proving that the binary you have matches the source you have is a tricky one no matter what.

        There actually are some very clever solutions to try to catch cheating compilers like this, but none of them are trivial. It's a cat and mouse game, and there are actually proofs that winning either side completely is impossible.

        • by m50d ( 797211 )
          Surely you bootstrap things by hand-assembling a simple nonoptimizing C compiler (nonoptimizing compilers aren't that hard, surely one could be written in assembly), then use that to compile your C compiler (thus getting a binary that matches your source), then use that to recompile itself (so you have a C compiler that can actually run at semi-reasonable speed, and whose binary matches its source assuming you verified its source wasn't doing anything nasty).
          • Re: (Score:3, Interesting)

            by evanbd ( 210358 )

            Now you've simply pushed the problem a level higher, into the assembler / linker. Yes, it helps, but there are other techniques as well.

            The most elegant technique I've seen goes like this. Maintain trusted dead-simple non-optimizing compiler A (possibly on a different machine). You also have untrusted compiler B, and its alleged source SB. Compile SB with B, resulting in B. Compile SB with A, resulting in B'. These binaries will be different, but should be functionally equivalent -- B' is probably
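
            The standard published form of this comparison is David A. Wheeler's "diverse double-compiling": rebuild the compiler source with both the untrusted binary and the trusted one, then recompile the source with each result and compare. Below is a minimal sketch in C; the compiler names, file names, and lack of error handling are placeholder assumptions, not anyone's actual tooling.

            /* Hedged sketch of diverse double-compiling: compare what an
             * untrusted compiler B and a trusted compiler A produce from the
             * same compiler source SB. All command and file names below are
             * hypothetical placeholders. */
            #include <stdio.h>
            #include <stdlib.h>

            /* Return nonzero if the two files differ byte-for-byte. */
            static int files_differ(const char *p1, const char *p2) {
                FILE *f1 = fopen(p1, "rb"), *f2 = fopen(p2, "rb");
                int c1 = EOF, c2 = EOF, differ = 0;
                if (!f1 || !f2) { differ = 1; goto done; }
                do {
                    c1 = fgetc(f1);
                    c2 = fgetc(f2);
                    if (c1 != c2) { differ = 1; break; }
                } while (c1 != EOF);
            done:
                if (f1) fclose(f1);
                if (f2) fclose(f2);
                return differ;
            }

            int main(void) {
                /* Stage 1: build the compiler source SB with the untrusted
                 * binary B and with the slow, trusted compiler A. A real
                 * harness would check every exit status. */
                system("untrusted-cc -o stage_b compiler.c");   /* B(SB) */
                system("trusted-cc   -o stage_a compiler.c");   /* A(SB) */

                /* Stage 2: the stage-1 outputs differ in bytes but should be
                 * functionally equivalent, so recompiling SB with each should
                 * give bit-identical results (assuming deterministic output). */
                system("./stage_b -o out_b compiler.c");
                system("./stage_a -o out_a compiler.c");

                puts(files_differ("out_a", "out_b")
                         ? "outputs differ: B may be a Thompson-style cheat"
                         : "outputs identical: consistent with an honest B");
                return 0;
            }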

            • by m50d ( 797211 )
              Now you've simply pushed the problem a level higher, into the assembler / linker.

              No, that's why I said hand-assemble. (And toggle it in on the front panel.) Yes, you still have to verify the hardware, but that ensures no one's messing with the software.

        • yo, I just read this,
          http://cm.bell-labs.com/who/ken/trust.html [bell-labs.com]

          the best bit.
          "In college, before video games, we would amuse ourselves by posing programming exercises."

          I think he is on to something here...
  • I read the paper (Score:5, Interesting)

    by suv4x4 ( 956391 ) on Tuesday October 02, 2007 @01:51AM (#20820245)
    I'm still convinced that it's possible to make a VM that appears, to software running within it, as real hardware.

    The paper, however, takes a practical approach, examining how some industry-standard VMs operate, such as VMware and Virtual PC.

    Those VMs take plenty of shortcuts to improve performance: they don't virtualize some instructions, but rather remap them, "shift rings" of execution, etc., as much as possible so as to take advantage of the hardware while remaining sandboxed. They don't virtualize the clock either, so you can time the performance.

    A rootkit isn't competing with other rootkits based on performance; it does so based on how undetectable it is. It's arguably a different problem. I think we have yet to witness what a full-blown VM made to be a rootkit will act like, and whether it'll be detectable.
    • Re:I read the paper (Score:4, Interesting)

      by ihavnoid ( 749312 ) on Tuesday October 02, 2007 @02:12AM (#20820323)
      The problem is that if the VM writer tries to take every possible measure to make the execution time similar (e.g. make privileged instructions appear to run as fast as non-privileged instructions), it has to slow the faster ones down. Suddenly, even your grandpa will notice something is wrong. The most insane method would be a VM based on a full-blown, cycle-accurate simulator, but that will be horribly slow.

      Instead, what I think is that it's not *impossible* to detect, but *difficult* to detect, because the VM detector is going to need a very, very long checklist to determine whether it is running on a VM or not. To be sure, it must check every possible privileged instruction's timing, check the system memory's contents using various workarounds (such as DMA), etc.
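
      A minimal sketch of the simplest entry on that checklist, in C for GCC/Clang on x86: time CPUID (which most VMMs must intercept, and which forces a VM exit under hardware-assisted virtualization) against a non-trapping instruction using the TSC. The iteration count and threshold are illustrative assumptions; a real detector would calibrate per CPU and check many more instructions, as described above.

      /* Rough timing heuristic: a trapped/exited instruction (CPUID) costs
       * far more cycles relative to a non-trapping one inside a VM than on
       * bare metal. The threshold below is an illustrative guess. */
      #include <stdio.h>
      #include <stdint.h>
      #include <cpuid.h>        /* __get_cpuid (GCC/Clang) */
      #include <x86intrin.h>    /* __rdtsc */

      static uint64_t time_cpuid(void) {
          unsigned a, b, c, d;
          uint64_t start = __rdtsc();
          __get_cpuid(0, &a, &b, &c, &d);   /* intercepted by most VMMs */
          return __rdtsc() - start;
      }

      static uint64_t time_nop(void) {
          uint64_t start = __rdtsc();
          __asm__ volatile ("nop");         /* never trapped */
          return __rdtsc() - start;
      }

      int main(void) {
          uint64_t cpuid_total = 0, nop_total = 0;
          for (int i = 0; i < 100000; i++) {   /* average to smooth out noise */
              cpuid_total += time_cpuid();
              nop_total   += time_nop();
          }
          double ratio = (double)cpuid_total / (double)(nop_total + 1);
          printf("avg cpuid: %llu cycles, avg nop: %llu cycles, ratio %.1f\n",
                 (unsigned long long)(cpuid_total / 100000),
                 (unsigned long long)(nop_total / 100000), ratio);
          puts(ratio > 20.0 ? "ratio suggests a VMM is intercepting CPUID"
                            : "ratio looks like bare metal");
          return 0;
      }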
      • Re:I read the paper (Score:4, Interesting)

        by suv4x4 ( 956391 ) on Tuesday October 02, 2007 @02:22AM (#20820367)
        The problem is that if the VM writer tries to take every possible measure to make the execution time similar (e.g. make privileged instructions appear to run as fast as non-privileged instructions), it has to slow the faster ones down. Suddenly, even your grandpa will notice something is wrong. The most insane method would be a VM based on a full-blown, cycle-accurate simulator, but that will be horribly slow.

        Two things:

        1. You assume the clock isn't manipulated, hence that fast instructions should be slowed down to match the virtualized ones. Instead, the direct instructions may be left running at full speed, and the virtualized ones used to skew the clock subtly enough to be undetectable to the naked eye, while still matching the apparent hardware performance seen by a detector running within.

        2. We're about to get plenty of cores on desktop machines, where most of the tasks are serial. If a VM made use of the extra cores to simulate a single core at around 50-60% of its native speed, it might prove undetectable to grandpa, who just browses the net and uses Excel.
        • Re: (Score:3, Insightful)

          by ihavnoid ( 749312 )
          Two things again:
          1. Do you really wish to manipulate the clock for every non-privileged instruction, which will result in horrible VM performance?

          2. Yes, your grandpa won't notice a 50% slowdown, but your anti-virus software will easily notice. Either your grandpa doesn't notice and your anti-virus does, or your anti-virus doesn't and your grandpa does (assuming the anti-virus software does an extensive amount of checking).

          What I was trying to say was that it takes a painful amount of performance over
          • Re: (Score:3, Interesting)

            by suv4x4 ( 956391 )
            1. Do you really wish to manipulate the clock for every non-privileged instruction, which will result in horrible VM performance?

            The huge majority of the time, the computer is running "userspace" instructions. Do the math.

            Yes, your grandpa won't notice a 50% slowdown, but your anti-virus software will easily notice. Either your grandpa doesn't notice and your anti-virus does, or your anti-virus doesn't and your grandpa does (assuming the anti-virus software does an extensive amount of checking)

            The anti-virus c
            • by cibyr ( 898667 )

              Remember: the hardware configuration the software sees is what the rootkit opts to report.

              This is where this all falls apart. It's pretty trivial to notice if the hardware you're running on has changed, and as mentioned in The Fine Article/Paper, it's bloody impossible to emulate every possible hardware combination out there, or even all of the common ones. I'd love to see a virtual machine that can fool my nVidia driver into thinking it has an 8800 and still run fast enough for even basic desktop usage, let alone any sort of multimedia or gaming. It's just not going to happen.

              • by suv4x4 ( 956391 )
                This is where this all falls apart. It's pretty trivial to notice if the hardware you're running on has changed

                Trivial for whom? How often do you scan and compare your list of hardware devices in normal operation? How often does your mom?
                Remember: trojan makers aren't interested in hacking a hardened hacker's computer. They're interested in the mythical mom.

                If my antivirus detects hardware changes, then it'll whine on actual hardware changes too. We arrive at the fact that Joanna discovered: it'll be god da
              • by andreyw ( 798182 )
                It doesn't have to, because it doesn't have to be a full-fledged multi-VM VMM environment, since the whole point of VM-based rootkits is to move the malevolent code outside the realm of detectability by the OS. This is why the paper sucks. It makes all these arguments for why existing VMware, Virtual PC or Parallels solutions are detectable, but that wasn't the point of Blue Pill.

                All you need is a thin hypervisor layer that provides nearly transparent access to the hardware. As far as the OS is concerned
                • Your statement "All you need is a thin hypervisor layer that provides nearly transparent access to the hardware." is very funny. It's like "All you need [to cure cancer] is a cure for cancer." Your hypervisor has to protect itself by consuming memory. It has to use CPU resources in order to do this. It has to make sure nothing else can rise to its privilege level while maintaining complete compatibility with everything on the system. Which means that it has to be able to emulate the ability to run a second
            • by tqbf ( 59350 )

              That's simply not how it works. This isn't DOS, and there isn't a simple BIOS call the OS uses to retrieve the current time. Start here: the x86 has a 64-bit timebase register, the TSC, which reports cycle-count time in about 150 cycles directly from the hardware. Joanna tried to virtualize the TSC and found that she couldn't do it reliably under AMD SVM. She had to resort to dynamic code translation, VMware-style, to detect and modify code that probed the TSC. The problem with that approach is left as an e

          • by kesuki ( 321456 )
            Name one anti-virus maker with a VMM detector. Every AV software I have used simply ignores virtual machines, hidden or otherwise.

            Right. A whitepaper about how easy it is to detect virtual-machine malware is not grandpa's AV detector popping up a warning about a virtual machine running on his system.

            This type of exploit has been known about for many months; virus writers take a lot less time than that to come up with working attack vectors. So it would not surprise me in the least to find that hackers are using l
        • by jmv ( 93421 )
          You forget that there's more than one clock. There's the machine's clock, which you can easily manipulate, but there's also the soundcard clock -- which you can't manipulate without your stuff sounding strange -- and then there's the NTP clock, which you can't manipulate at all. Any significant difference between these clocks means a VM is detected. So basically, slowing down everything equally or skewing the local clock isn't an option.
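
          A hedged sketch of that cross-clock idea in C: run a fixed chunk of CPU work and compare the elapsed time reported by the TSC against CLOCK_REALTIME (NTP-disciplined if the OS is syncing). The assumed ~3 GHz TSC rate is a placeholder; a real check would calibrate it first and, as noted above, also consult a genuinely external reference such as an NTP server or the sound card clock.

          /* Compare elapsed time for a fixed workload as seen by two clocks:
           * the CPU's TSC and the (assumed NTP-disciplined) system wall clock.
           * A VMM that skews the guest-visible TSC but not the external
           * reference shows up as a persistent disagreement. */
          #include <stdio.h>
          #include <stdint.h>
          #include <time.h>
          #include <x86intrin.h>

          int main(void) {
              volatile double acc = 0.0;       /* volatile: keep the loop alive */
              struct timespec w0, w1;

              clock_gettime(CLOCK_REALTIME, &w0);
              uint64_t t0 = __rdtsc();

              for (long i = 1; i <= 200000000L; i++)   /* fixed chunk of work */
                  acc += 1.0 / (double)i;

              uint64_t t1 = __rdtsc();
              clock_gettime(CLOCK_REALTIME, &w1);

              double wall = (w1.tv_sec - w0.tv_sec) + (w1.tv_nsec - w0.tv_nsec) / 1e9;
              double tsc  = (double)(t1 - t0) / 3.0e9;  /* ASSUMED ~3 GHz TSC rate */

              printf("wall-clock: %.2fs  TSC-derived: %.2fs  (acc=%.3f)\n",
                     wall, tsc, acc);
              return 0;
          }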
        • by Cheesey ( 70139 )
          Actually I think you are wrong. It is known to be very difficult to simulate the timing behaviour of a complex CPU as found in modern desktop PCs, because the timing behaviour depends on so many factors. This has previously been a problem for real-time systems because programs on such complex CPUs have very poor timing behaviour in the worst case.

          But it is also a problem if you are trying to hide a virtual machine, because the complexity of operation creates a sort of "timing fingerprint" that is unique to
        • by tqbf ( 59350 )

          "Undetectable to grandad"? Asinine.

          The threat model facing rootkits is not end-user computer savvy. It's conventional anti-malware software. The question isn't whether the person sitting at the computer is smart enough to notice a 60% slowdown. It's whether the impact the rootkit has on the system is reliably measurable, either directly or through a side-channel, in a way that can be harnessed by Norton Antivirus. If it is, you lose; your "undetectable rootkit" is now literally a bullet point on the packag

      • by Sancho ( 17056 )

        Suddenly, even your grandpa will notice something is wrong.
        This is really a straw man. The point is undetectability from within the guest OS (i.e. antivirus or whatever security software is running should not be able to detect it). There are plenty of attacks that you can use to detect the infection from the outside.
    • Full vm (Score:3, Insightful)

      by leuk_he ( 194174 )
      The current commercial VMs don't try to be undetectable. But a VM created with the purpose of being undetectable might be a different matter.

      It might be possible to create a VM that only virtualizes a specific part of a PC: only hide some memory and disk space, and pass all other parts through to the actual hardware. I don't know if it is feasible.
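
      As an illustration of how little the off-the-shelf VMs try to hide, here is a hedged C sketch (GCC/Clang on x86) of the most basic probe: mainstream hypervisors deliberately set the CPUID "hypervisor present" bit and publish a vendor signature, while a rootkit VMM would simply clear them, which is exactly why the discussion moves on to timing and other side effects.

      /* Check the CPUID "hypervisor present" bit (leaf 1, ECX bit 31) and,
       * if set, read the hypervisor vendor signature from leaf 0x40000000.
       * Cooperative VMMs expose these by default; a hostile VMM would not. */
      #include <stdio.h>
      #include <string.h>
      #include <cpuid.h>

      int main(void) {
          unsigned a, b, c, d;
          __cpuid(1, a, b, c, d);                 /* standard feature leaf */
          if (!(c & (1u << 31))) {
              puts("no hypervisor bit: bare metal, or a VMM that hides it");
              return 0;
          }
          char vendor[13] = {0};
          __cpuid(0x40000000, a, b, c, d);        /* hypervisor vendor leaf */
          memcpy(vendor + 0, &b, 4);
          memcpy(vendor + 4, &c, 4);
          memcpy(vendor + 8, &d, 4);
          printf("hypervisor present, vendor signature: \"%s\"\n", vendor);
          return 0;
      }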

    • You clearly didn't read the paper, because it doesn't simply describe how "industry standard VMs operate". Garfinkel and Ferrie are talking about fundamental x86 architectural issues that make intercepting hardware accesses and emulating them in software perceptible to code running on the same machine. The Blue Pill VMM rootkit doesn't leave important instructions "unvirtualized", but it has to operate within the x86 memory hierarchy, and so remains detectable.

      For example, the fact that a transition in and

    • I'm still convinced that it's possible to make a VM that appears, to software running within it, as real hardware.

      You mean like the Bochs Pentium Emulator [sf.net]? Because - that is pretty much what they do. They emulate the entire computer, processor included. There was one branch that used a Linux module to use real hardware to speed things up (x86 only), but it otherwise fully emulates the computer including all instructions of the emulated processor and the system timer.

      They've done a great job with it. However

    • by tagbo ( 682269 )
      I promise you that when/if you find VM rootkits operating in the wild, it will already be too late. Maybe the Wachowskis didn't mess up the ending of the Matrix like most people thought...
  • Missing the point (Score:5, Insightful)

    by insecuritiez ( 606865 ) on Tuesday October 02, 2007 @02:09AM (#20820311)
    Unfortunately, this paper completely misses the point. It is not so much about detecting a VM-based rootkit as it is about detecting VMs in general. The authors argue that if you detect a VM when you aren't expecting to, you've found a rootkit. Joanna's argument is that in a few years, everything is going to be using VM technology and you won't be able to tell a "good" VM from a "bad" one.

    See virtualization-detection-vs-blue-pill [blogspot.com] and her presentation on the subject here [bluepillproject.org]. No one ever said that detecting a virtual machine is impossible. They are saying discriminating between malicious and non-malicious VMs is impossible.
    • Re: (Score:3, Interesting)

      by julesh ( 229690 )
      Joanna's argument is that in a few years, everything is going to be using VM technology and you won't be able to tell a "good" VM from a "bad" one.

      I fail to see what purpose the average user has for VM technology. Sure, it's great for server systems, and as a developer I find it extremely handy, but if all you do with your computer is read e-mail, browse the web and run MS Word, why would you want a VM?
      • by EvanED ( 569694 )
        Depending on how broadly you wish to interpret what a VM is, you could consider stuff like Apple's Rosetta a virtual machine. People around here regularly call for MS to use virtualization to provide an avenue for them to ditch a lot of the backwards-compatibility cruft that's causing many of their issues.

        These things aren't exactly like running a whole OS in virtualization, but some of the same technology is used, and I could see possibilities for using hardware VT support.
        • I thought Rosetta was an emulator?
          • by EvanED ( 569694 )
            What's an emulator and what's a true VM is somewhat blurry. For instance, if my understanding is right, VirtualPC emulates instructions that are executed in ring 0. But most people would still call it a virtual machine monitor.

            There are other things, like the Java Virtual Machine, that are also in some sense an "emulator" -- but it's emulating a machine that runs Java bytecode, so it counts as a virtual machine. Similar for Rosetta.

            If my understanding is right, Rosetta also uses the same dynamic translation
      • Re: (Score:3, Interesting)

        by Ngwenya ( 147097 )

        I fail to see what purpose the average user has for VM technology. Sure, it's great for server systems, and as a developer I find it extremely handy, but if all you do with your computer is read e-mail, browse the web and run MS Word, why would you want a VM?

        Lots of reasons: fault isolation (e.g. jail() on steroids); compatibility isolation (e.g. while most of my system runs the newest version, I keep my old apps running in a VM with an older kernel); hardware interoperability isolation (e.g. this bit of ha

      • by Sique ( 173459 )
        I am currently wondering if I should use VMs to set up environments for all those remote systems I have to work with. Each one asks for a different toolset, and sometimes they interfere. So having a VM for each environment with exactly the tools needed would save me much hassle.
      • Microsoft's new research operating system "Singularity" http://research.microsoft.com/os/singularity/ [microsoft.com] runs every process in its own virtual machine. This way, if an attacker breaks your email client, it's MUCH more of a pain in the ass to get to the Word documents.
        • by julesh ( 229690 )
          Microsoft's new research operating system "singularity" http://research.microsoft.com/os/singularity/ [microsoft.com] runs every process in its own virtual machine.

          No it doesn't, at least not in the way we're talking about here. Processes are run under a modified .NET runtime, so they don't have direct access to hardware, but we're talking here about virtualized hardware that looks like the real computer, which is a substantially different affair.
      • by Sloppy ( 14984 )

        if all you do with your computer is read e-mail, browse the web and run MS Word, why would you want a VM?

        Because the software that "average users" run tends to be written very quickly instead of carefully. You explicitly mentioned MS Word! You just mentioned email and web browsing too, where the most popular applications have repeated histories of bugs that allow them to treat supposedly-harmless data as executable code. Hell yes those should be sandboxed to contain destruction. Maybe VMs aren't the b

    • You could do it in hardware. If the hardware lists the hash of the binary of each VM running on the system on a nifty LCD screen, the problem goes away.
    • by tqbf ( 59350 )

      Be fair: the only researcher saying that "hypervisors can be detected, but rootkits can't" is Joanna. The rest of us, from what I can see, agree: you might not be able to detect Blue Pill by name, but you can detect unauthorized virtualization, even if you're already legitimately virtualized. Currently, the only source of unauthorized virtualization? Blue Pill.

    • Also, I think Joanna was really trying to hammer the point that this is an arms race (much like signature detection vs malware at the moment). For every detection technology, there is an evasive technology to get past it, and the same is true in reverse. Detecting the current ways VM rootkits are implemented really doesn't get you much in the long run.
    • You have a point, but don't forget that I may choose to be running Red Pill (TM). Red Pill is MY virtualization software, run for MY reasons. As part of its startup, Red Pill does an extensive set of "metal checks" to make sure it's running on real metal, not on some Blue Pill. Lest some Blue Pill do "clock leveling" and make machine performance consistent, but at a lower level, Red Pill has had me input hardware configuration data, so it knows what the clock (and other aspects of the system) really ough
  • by Chas ( 5144 ) on Tuesday October 02, 2007 @02:13AM (#20820327) Homepage Journal
    This is undetectable*!

    That is undetectable*!

    * Undetectability based on current technology and the fact that nothing about a given vector of attack has been defined or studied in depth yet. Claim subject to change once the phenomenon has been studied, quantified, and dissected in a rational, forensic manner.

    Translation: You can't detect it because you aren't looking for it (yet).

    Translation 2: This new attack can't be defeated because nobody's tried yet!

    That's what so many of these "security researchers" and pretty much ALL of the tech press forget.

    Like any other system security compromise, the amount of time these things remain "compromising" depends largely on how long it takes to define it.

    • This comment deserves a sole +5 insightful. Seriously. Demote other comments in this story if you have to.
  • Itanium runs x86 instructions through pure software emulation
    Transmeta transcodes source instructions into its native code
    New versions of Intel and AMD processors and motherboards most probably will not have the same instruction timings or emulate undocumented aspects of current hardware and software
    New hardware-based virtualization techniques may not change CPU performance much and can allow guest OS direct access to selected hardware

    The bottom line is that VM detectors can only reliably fingerprint hardwa
    • by cnettel ( 836611 )
      Yeah, therefore the point would be to establish a very detailed baseline for a specific system. That way, you can analyze the exact clock skew between the sound chip and the RTC, timings for specific instructions, etc. Then, it should be possible to detect whether you are suddenly in a VM jail. To detect the jail without ever having seen anything else, that's far harder...
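
      A hedged sketch of the baseline idea in C for Linux: hash a few hardware-describing files on a known-clean boot, record the value, and compare on later boots. The file paths are Linux-specific assumptions, and a real baseline would also record the timing and clock-skew measurements mentioned above rather than only static identifiers.

      /* Fingerprint a few stable hardware identifiers with FNV-1a and print
       * the hash; compare it against a value recorded on a known-clean boot.
       * Paths are Linux/sysfs assumptions; volatile files (e.g. /proc/cpuinfo
       * with its changing MHz fields) are deliberately avoided. */
      #include <stdio.h>
      #include <stdint.h>

      static uint64_t fnv1a_file(const char *path, uint64_t h) {
          FILE *f = fopen(path, "rb");
          if (!f) return h;                   /* missing source: just skip it */
          int c;
          while ((c = fgetc(f)) != EOF) {
              h ^= (uint64_t)(unsigned char)c;
              h *= 1099511628211ULL;          /* FNV-1a 64-bit prime */
          }
          fclose(f);
          return h;
      }

      int main(void) {
          const char *sources[] = {
              "/sys/class/dmi/id/sys_vendor",
              "/sys/class/dmi/id/product_name",
              "/sys/class/dmi/id/board_name",
              "/sys/class/dmi/id/bios_version",
          };
          uint64_t h = 14695981039346656037ULL;    /* FNV offset basis */
          for (unsigned i = 0; i < sizeof(sources) / sizeof(sources[0]); i++)
              h = fnv1a_file(sources[i], h);
          printf("hardware fingerprint: %016llx\n", (unsigned long long)h);
          return 0;
      }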
      • by iamacat ( 583406 )
        The problem would be distinguishing between someone installing a hypervisor and a user buying a new sound card. The latter case potentially has a larger effect on timing.
  • Amazing how much money your department of civilian oppression can waste on unrelated research. Yes, that is right: if you RTFA, the last paragraph discloses their funding from DHS. Their subject is a noble cause, but what does it have to do with the terrorists DHS was supposed to find? Or did they broaden their scope to include Romanian hackers looking to make a buck?

    Another concern is that this study is presented by those companies that have a stake in spreading positive news about it. And tadaa: the news
  • by Effugas ( 2378 ) * on Tuesday October 02, 2007 @04:31AM (#20820841) Homepage
    And what's Tommy the Tank Engine security?

    "I think it's safe! I think it's safe! I think it's safe!" :)

    Look. Virtualization is not a security technology. I've gotten a VMware engineer to admit this publicly, on stage, with only mild needling. Virtualization reduces hardware to a protocol that must be parsed, or (as is increasingly common) it allows direct passthrough to devices on buses that have no conception of host vs. guest (see: USB).

    There was actually some really cool work recently done by Jeff Forristal, who pointed out that since all VMs are on the same LAN, all the old LAN-based attacks work really well cross-VM. Oops.

    Now, regarding Joanna's attack, she's completely right that everyone's going to virtualization -- it's just so much more manageable. The consumer market will eventually embrace this.
    • by xarium ( 608956 )
      I think you mean "Pony Engine" security; http://en.wikipedia.org/wiki/The_Little_Engine_That_Could [wikipedia.org] Seriously though, I've read the paper and its conclusion is fundamentally flawed. The summary is equivalent to: "All current Virtual Machines are detectable; therefore Virtual Machines will always be detectable." That statement is quite plainly wrong. Just as most encryption schemes are broken in theory long before an actual exploit can be constructed, so too a VM can be trivially demonstrated to work in
  • I think the fact that a detection mechanism can be found for each VM rootkit is very plausible. However, won't rootkits always find a way to circumvent the detection mechanisms? In that case, we'll probably end up in a new hacker-vs-security war, with hackers tweaking VMs to bypass detection and security folks who keep finding new detection mechanisms. While the article clearly indicates that finding detection mechanisms is much easier than finding ways to bypass or fool the detection mechanism, it doesn't
  • by ajs318 ( 655362 ) <sd_resp2 AT earthshod DOT co DOT uk> on Tuesday October 02, 2007 @06:33AM (#20821299)
    A properly-created virtual machine ought to be absolutely undetectable from within. The simple fact is that all commercial offerings to date haven't tried to be undetectable.

    If you lock a person in a windowless room where the only "access to the outside world" is a TV set where you control all the programmes, you essentially control everything they know about the outside world; and you then can make that person believe anything you want them to believe. You could even cause them to think night was day, if their only reference was the continuity announcer's time checks (and/or you could give them a special watch which displayed your manipulated version of the time). But if you accidentally or deliberately let, say, BBC1 get through unaltered, you aren't controlling everything they see; and by comparing the news on the real BBC1 with your altered news on the other stations, they could ascertain that something was amiss.

    If your virtualised environment behaves absolutely "correctly" with respect to undocumented instructions and the like (i.e. they aren't trapped and made to do something specific to your virtualisation application), and all I/O channels are properly manipulated (to the point where even the scan line count on the graphics card is adjusted to account for the slowdown in the virtual environment), then it's undetectable from within. If, however, even one undocumented instruction does not behave exactly as on the real processor, or even one I/O channel is left unmunged, then there is a potential way the virtual environment could be detected.

    Of course, all that manipulation of stuff is bound to impose some kind of overhead, so a truly undetectable VM might end up being slow as hell ..... but on the inside, you don't know it's slow, precisely because you've been fed misinformation about the time things are taking. And processors are getting faster. They used to think that chop-and-swap analogue TV encryption would never be trivially crackable in practice .....
    • If, however, even one undocumented instruction does not behave exactly as on the real processor, or even one I/O channel is left unmunged, then there is a potential way the virtual environment could be detected.

      Malware hosting doesn't have to be perfect and hide its presence in every possible way. It just has to hide from the checks that the market-leading malware detectors actually use. A malware author can just set up a test system and, each time the detector finds a hit, track it down and emulate around it. As you sugges
    • You could even cause them to think night was day, if their only reference was the continuity announcer's time checks (and/or you could give them a special watch which displayed your manipulated version of the time)
      Of course, by the form of your argument you have presented the weakness of your argument. All you need to test the "prisoner hypothesis" is an independent clock. Every processor, every VM, every rootkit is subject to timing tests.
      • by ajs318 ( 655362 )
        No it's not. Remember, you can control how many clock cycles the program on the inside thinks have elapsed. So even if it does manage successfully to ask someone else the time (by some method that would slip past your "blue pencil"), it won't have any reason to doubt the answer that comes back.
        • Yes, but you can compare the local CPU clock with external clocks. If the CPU claims that the timing test you execute took 2 seconds, but 20 seconds have elapsed according to an external clock, then you know something is amiss.

          The external clock doesn't even have to be accessed directly. The testing app could run a test and ask the user if it seemed to take 2 seconds or 20 seconds. I don't think a CPU can skew a human's perception of time...
          • by ajs318 ( 655362 )
            That's the point: you can persuade the program inside the virtual environment that 20 seconds really have elapsed. Because the only way it can find out what its own clock speed is, is to run some sort of timing loop which lasts for a known number of clock cycles; and then check that against some internal clock on the motherboard (using a known method). And the only way it can access that internal clock, is via your virtualisation layer (so you can alter the information in transit). So when it goes off
            • You can persuade the program, but you can't persuade the user. The program just needs to ask the user "did this test take 2 seconds or 20 seconds?" The user will know the difference between a 2s test and a 20s test. Of course, if you ask them to distinguish between 2 seconds and 4 seconds, they might have a harder time. And tests like that obviously don't help at all if you don't have a flesh-and-blood user to ask.
        • I'm starting to believe the many-worlds hypothesis.

          In my universe we have this thing called NTP that does exactly what you claim is impossible. We also have a computer science concept called "looping" that allows us to measure many small-duration events in order to analyze them statistically.
  • by TrumpetPower! ( 190615 ) <ben@trumpetpower.com> on Tuesday October 02, 2007 @08:53AM (#20822331) Homepage

    Folks, this is the Halting Problem [wikipedia.org]. If you have a foolproof method of detecting that you’re running in a VM, you can build a special-purpose VM that watches for that method specifically to defeat it.

    Similarly, you can’t ever rule out the possibility that you yourself are living in a Matrix-style (etc.) simulated world. You might be able to detect that you are under certain circumstances, but any sufficiently advanced simulation is indistinguishable from reality. No, really!

    Oh — and all this applies equally to any supposedly “omnipotent” deities you might care to propose. After all, if “God” could trap “The Devil” (to pick the current favorite pair of arch-rival gods) in a simulated world such that The Devil thought that he (The Devil) was the all-powerful creator of life, the universe, and everything ... then God has no way of knowing that The Devil hasn’t done the same to him. And if God doesn’t have any foolproof way of knowing whether or not The Devil has him trapped, and if he himself has no foolproof way of trapping The Devil, it hardly makes any kind of sense to describe God as “all-powerful,” now, does it?

    Cheers,

    b&

    • or not.

      Are You Living In a Computer Simulation? http://www.simulation-argument.com/ [simulation-argument.com]
    • Practical rebuttal:

      The halting problem involves metalanguage in a sense. The real program running at the top level that analyzes itself is root. The program being analyzed is described, thus the root program is the "metaprogram".

      They invented the term "metalanguage" to get around this exact fallacy.

      Now, assuming that Turing's program can detect, with absolute certainty, whether or not a certain program would halt, it would, by necessity, need to be aware of its own decision on whether or not to t
  • but what if you already run your system in a VM? What if a rootkit injects itself as another virtualization layer (at either side of your good VM)? How do you detect this sort of thing?
    • by HTH NE1 ( 675604 )

      All is fine and dandy, but what if you already run your system in a VM? What if a rootkit injects itself as another virtualization layer (at either side of your good VM)? How do you detect this sort of thing?

      Presumably with virtualization detection inside your good VM. Each layer of the onion needs to detect whether it is virtualized and whether or not it is OK with that.

      But really, you only need the layer expecting to run on the hardware to be able to detect anyone virtualizing it instead.

      Unless having the hardware running a trusted VM running an untrusted VM running the applications that are fine with being virtualized by the trusted VM--a sort of VM-in-the-middle using another rootkit exploit on the truste

  • It looks like they're just talking about detecting whether you're virtualized or not. So perhaps some of these techniques could be used by user-hostile software publishers (i.e. you're not allowed to run our server in a VM without getting a special (i.e. more expensive) license, or you're not allowed to run our media player unless we know it is directly accessing the display hardware), but I don't see how this gives any rootkit-detection advantages.

    And don't ever forget: from a security standpoint, detecting mal

  • I really wish Slashdot editors would place the (PDF) warning before the link.
  • "That's undetectable!"

    You keep using that word... I do not think it means what you think it means.
  • Why has everyone started hating on kdawson? Is it the 'in' thing?
