Virtualization Decreases Security

ParaFan writes "In a fascinating story on KernelTrap, Theo de Raadt asserts that while virtualization can increase hardware utilization, it does not in any way improve security. In fact, he contends the exact opposite is true: 'You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes.' de Raadt argues that the lack of support for process isolation on x86 hardware, combined with numerous bugs in the architecture, is a formula for virtualization decreasing overall security, not increasing it."
This discussion has been archived. No new comments can be posted.

  • Uh oh (Score:5, Funny)

    by $RANDOMLUSER ( 804576 ) on Thursday October 25, 2007 @10:55AM (#21114697)

    Theo de Raadt asserts...
    CAUTION: flame war ahead.
    • Re:Uh oh (Score:4, Funny)

      by Anonymous Coward on Thursday October 25, 2007 @11:02AM (#21114805)

      CAUTION: flame war ahead.
      No there isn't! How dare you say that?? F-you! YOU GO TO HELL AND YOU DIE!!!
    • Re: (Score:2, Insightful)

      by Anonymous Coward
      TdR also asserts that SMP makes the system less secure ... and he is right. Most new functionality does.

      It is sad that he doesn't acknowledge that the larger your trusted code base the less secure you are.
    • by Morgaine ( 4316 ) on Thursday October 25, 2007 @02:05PM (#21117581)
      > CAUTION: flame war ahead.

      There doesn't need to be a flame war, because in this particular instance Theo's argument has a gaping hole in it. Consider the following two system architectures:

      1) An ordinary multi-function Unix-type system which also runs a non-trivial component that is exposed to the world (all non-trivial components have bugs, as Theo is right to point out, and hence are attack vectors).

      2) A machine running 2-guest virtualization, in which the non-trivial component runs in one guest, and the rest of the functions run in another.

      Now consider what happens when the world-facing component gets compromised, and by one of many methods (because sysadmins are fallible) the attack gets promoted to root privilege. Security has failed in one guest, but has it failed in the other? Not necessarily, depending on whether the sysadmin has made repeated blunders and not just one. (Eg. a fool might be keeping ssh keys on the public-facing guest ...)

      In this scenario, the isolation created by virtualization has given the sysadmin an additional bulkhead against his own fallibility, and that is worthwhile for security, not only for better hardware utilization. The partitioning of the application and O/S space has reduced the cross-section of software open to attack.

      Theo's argument also doesn't bear scrutiny at the hypervisor level, because while an O/S in dom0 is just as fragile as the one in domU that runs an exposed application, the instance in the hypervisor isn't exposed to attack. Theo seems to miss the distinction between endpoint fallibility and fallibility in the conveyance and resourcing that is done by hypervisors. They're different.

      I like Theo's hard stance on security, but on this issue he's handwaving.
      • by Chris Mattern ( 191822 ) on Thursday October 25, 2007 @02:13PM (#21117679)
        I don't think you've entirely grasped Theo's argument. He argues that your reasoning is invalid *because it assumes that the interface between the O/S in dom0 and the hypervisor has no security holes in it*. You don't get to just state that the hypervisor isn't exposed to attack. Now, you can argue that because of its limited nature, and because great pains are taken to avoid unwanted interaction between the hypervisor and the virtual O/S, it is more secure than ordinary software interactions. But I think Theo is not arguing that VMs are less secure than running all your stuff in one O/S. I think he's arguing that VMs are less secure than running your services *on actual separate machines*. And that stands a good chance of being true.

        Chris Mattern
  • Counterargument (Score:3, Insightful)

    by Anonymous Coward on Thursday October 25, 2007 @10:59AM (#21114763)
    Virtualization layers can be much smaller than operating systems. Hypervisors don't have to do as much as a monolithic kernel does, so they're less prone to security holes.
    • I am not sure how this makes sense given that each VM is actually running a monolithic kernel.

      ]{
      • by spun ( 1352 )
        You don't get the concept. It doesn't matter what the VMs are doing. The monolithic kernels can get hacked up, down, and sideways, and if there are no bugs in the hypervisor, the rest of the machine is still secure.
  • by eldavojohn ( 898314 ) * <eldavojohnNO@SPAMgmail.com> on Thursday October 25, 2007 @10:59AM (#21114765) Journal
    Well, here's his original post : [kerneltrap.org]

    Virtualization seems to have a lot of security benefits.

    You've been smoking something really mind altering, and I think you should share it.

    x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit.

    You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes.

    You've seen something on the shelf, and it has all sorts of pretty colours, and you've bought it.

    That's all x86 virtualization is.
    It's highly probable that Theo is right. After reading the above post, it's highly probable he is a very abrasive and one sided individual. But this is a tech forum so I won't get into judging character.

    However, his technical argument is ... somewhat unsound in my humble opinion. He seems to follow the train of thought that 1) people are, by nature, erroneous coders 2) virtualization means more code therefore 3) virtualization has more errors.

    I'm going to point out some other things I know about coding. Although more lines of code usually mean more bugs, this is not always the case. Correlation does not equal causation: more lines of code simply raise the probability that more people contributed to the project, and therefore that one of them was a bad coder. Also, if you plan things out and follow a rigorous model, it is within your power to make fully functional, well-crafted software.

    My second point is a different way of looking at the problem. Let's take the naive approach of assuming a primary job of the operating system is to protect the user (and applications) from completely fouling things up in the hardware & memory realm. So it does an 'ok' job at this but, as Theo noted, some bugs still exist. Let's say it's something really bad like they don't stop programs from altering a very sensitive range of memory that is very vital to the correct execution of the operating system itself. Now, hypothetically, the virtualized layer on top of this would give coders a chance to catch this and correct it and protect the user from bringing down the operating system. In this way of looking at things you have two nets. Alone one lets many things pass through so you double it up and now you're catching more fish.

    But my analogy is probably very flawed and I must confess I have coded neither of these pieces of software so I cannot confirm or deny this. I am quite shocked that Mr. de Raadt would react so abusively to a post where someone was merely saying that they 'appeared' to be receiving some amount of additional security from virtualization.

    As for the very last comment Mr. de Raadt makes, I am confused. My employer uses virtualization on a mass scale to more effectively utilize hardware. I believe it has more uses than just bright shiny colors and wrapping--in fact I am interested in its potentials for hosting web OSs and other neat applications to users. It might not be the future like some people think it is but I think Mr. de Raadt was suffering a moment of frustration or dealing with irritable people when he authored this.

    I do wish he were open to more ideas. The second you start to just outright dismiss all your options because they don't satisfy you on the surface you will find you are left with none and often miss the best.
    • Re: (Score:3, Insightful)

      by kebes ( 861706 )

      It's highly probable that Theo is right. After reading the above post, it's highly probable he is a very abrasive and one sided individual. But this is a tech forum so I won't get into judging character.

      This is off-topic but I'm going to say it anyway. After reading the email exchange I find Theo's style quite bothersome. He's a highly skilled hacker and I don't doubt his technical abilities. However his writing style is terrible for technical discussions:

      1. He uses troll-like sentences that divert away

    • One unplugged from the network, with no applications.

      If you want to do anything in this world, there is risk, and generally the greater the risk, the greater any reward. So while he may well be correct, he's totally missed the point of, well, everything. Leave him to stew in his paranoid fantasy world. I'm sure the NSA, CIA or whoever will be happy to use his skills.

       
    • by rk ( 6314 )
      That Theo... he's such a smooth talker. I'll bet he's quite a hit with the ladies.
    • by yuna49 ( 905461 )
      Regardless of Theo's language or his comments about the quality of OS engineers, I thought this argument was more compelling:

      x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection.

      Just how nasty is the x86 architecture as a platform for virtualization? Certainly the x86 wasn't originally designed to be an S/370 in a smaller box. I read Theo's post as raising the question of how exposed the hype
    • by Soko ( 17987 )
      It's highly probable that Theo is right.

      Theo is a very smart and resourceful individual - that's shown by his work

      After reading the above post, it's highly probable he is a very abrasive and one sided individual.

      That's an understatement to say the least. It's not "highly probable", it's an absolute certainty.

      But this is a tech forum so I won't get into judging character.

      You must be new here. ;-)

      Anyway, abrasive as he is, when Theo has something to say on security I tend to listen as he has a proven track re
  • Theo thinks so highly of himself, he is just wrong on this one. There is not one recorded/public example of someone breaking out of the isolation of a virtual environment! I dare someone to demonstrate otherwise, and I will eat my words.
    • by mccalli ( 323026 )
      There is not one recorded/public example of someone breaking out of the isolation of a virtual environment! I dare someone to demonstrate otherwise, and I will eat my words.

      VMware Tools seems to do it every day...

      Cheers,
      Ian
      • That's a feature, not a bug. The existence of that hole is well-documented and intentional (even if the details of exploiting it aren't). I haven't used VMware, so I can't comment on whether that feature can be disabled by the user, but it would be trivial for VMware to do so.
    • Re: (Score:3, Informative)

      by LLuthor ( 909583 )
      Lots of hypervisors and VM kernels are vulnerable, and can allow guest OSes to inject code into the host OS.

      See this for just a few examples: http://secunia.com/advisories/26890 [secunia.com]
      I can easily find several implementations that allowed DoS and privilege-escalation attacks in older versions (these are fixed in the current versions, but you can bet more flaws will be found).

      Regardless of Theo's opinion of himself, he is right in that more complexity means more bugs.
    • by Cheesey ( 70139 )
      Here is one [securitytracker.com].

      Beware of a false sense of security! VMware was not designed to be a security tool.
    • by Krondor ( 306666 )
      There is not one recorded/public example of someone breaking out of the isolation of a virtual environment! I dare someone to demonstrate otherwise, and I will eat my words.

      How do those words taste? [eweek.com].

      From the link: "It could allow a malicious hacker to sidestep the virtual machine and exploit the underlying operating system."

      Anyway I think that you do make a point. Exploiting the underlying OS isn't as much as exploiting the guest OS in the virtual instance. Interesting stuff like Blue Pill [blogspot.com] (which is hotl
  • by timeOday ( 582209 ) on Thursday October 25, 2007 @11:06AM (#21114895)
    A few years ago, it seemed email worms were constantly ravaging Outlook. That, I noticed. But that seems to have tapered off. Haven't noticed any panicked patching of zlib or ssh or sendmail lately. What is keeping people busy these days? Spyware-infested zombie boxes? Anything else?
  • Risk profiles (Score:5, Insightful)

    by Anonymous Coward on Thursday October 25, 2007 @11:08AM (#21114919)
    Let's consider the following:
    1. Security is improved by minimizing the number of services your software layer exports.
    2. Virtualization has a relatively small, well-defined number of services.
    3. Operating systems do not.
    4. ???

    Virtualization is no doubt a complex problem to get right, but it's only one problem. There is a relatively fixed set of hardware any virtualization system claims to support. A reasonably complete virtualization system can be frozen at some level of functionality. An operating system cannot; it must, by nature, constantly evolve to meet new requirements. Hardware, in contrast, is relatively more stable.

    Operating systems running on virtualized systems also have the advantages of operating systems running any fixed configuration. While not quite as consistent as a completely emulated environment, virtualization gets most of the benefits, under reasonable assumptions.

    So, in short, virtualization has the same sort of benefits microkernels were supposed to provide, albeit with a much more heavyweight solution: smaller core that's easier to secure. Virtualization has been used in the mainframe community for years. Virtualization is an even stronger form of process isolation than what operating systems provide.

    Virtualization is much more costly to run than a standard operating system process. This should be a clue that it probably provides stronger isolation guarantees, even if you don't buy the rest of the argument.

    I think it's a specious argument, as usual, to claim that securing the virtualization layer is no harder or easier than securing an operating system. I think securing the virtualization layer is going to be much easier, because while the problem itself is complex, it's still less complex than a complete operating system is.

    A better argument would have been to point out that guest operating systems running under virtualization are no less vulnerable to being compromised than those running on real hardware. But then that would point the finger at operating system vendors, not virtualization ones.
    • by Aladrin ( 926209 )
      I think you missed the point.

      App A may be secure, but probably is not.
      App B may be secure, but probably is not.

      If you run BOTH, you're even less likely to be secure.

      It's the same here. OS A is insecure. To 'secure' it, you run it under Virtualization, which is itself possibly insecure. You've now got all the insecurity of OS A and all its apps plus the Virtualization to worry about.

      If Virtualization covered up the problems with the OSs it hosts, there would be no issue. It does not.
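
      The compounding described here can be put in toy numerical terms. This is a hypothetical sketch: the per-component probabilities are invented and independence between components is assumed.

```python
# Toy model: if every component in a stack must be hole-free for the
# stack to stay secure, each added layer can only lower the overall
# probability. Independence is assumed; all numbers are invented.

def p_stack_secure(component_probs):
    """Probability that all independent components resist attack."""
    result = 1.0
    for p in component_probs:
        result *= p
    return result

p_os = 0.95    # assumed chance the OS has no exploitable hole
p_virt = 0.98  # assumed chance the virtualization layer has none

print(p_stack_secure([p_os]))          # OS alone
print(p_stack_secure([p_os, p_virt]))  # OS plus virtualization: strictly lower
```

      Under this reading, adding the virtualization layer to the same trust chain lowers the product, which is exactly the "larger trusted code base" objection.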
      • I don't agree. You're assuming that an attacker can exploit bugs in both App A and App B, at will. I don't think that's the case. In order to even touch the OS, you have to first exploit the virtualization framework.

        I run virtualization because it seems stronger to me than not. Without virtualization, in order to root my box, the user needs to follow this order:

        * Exploit a running process to get in
        * Find a hole in any root-permission process to get root privileges

        (Note that these might be the same step in s
  • Useless (Score:4, Interesting)

    by andreyw ( 798182 ) on Thursday October 25, 2007 @11:10AM (#21114947) Homepage
    Theo's side keeps asserting that "x86 virtualization isn't secure", but they seem to be perfectly comfortable keeping the discussion at the level of "I'm right, NO I'M RIGHT", without any corroborating statements (Hint: Theo's "I am familiar with x86 and its 'nastiness'" isn't one). What's not secure about SVM? What's not secure about VT-x? Why does Theo think that virtualization somehow has to imply legacy PC I/O emulation?

    Ugh.
    • Re:Useless (Score:5, Interesting)

      by Krondor ( 306666 ) on Thursday October 25, 2007 @11:40AM (#21115429) Homepage
      What's not secure about SVM? What's not secure about VT-x?

      VT-x and SVM provide paths for rootkits to integrate and hide. New rootkits like Blue Pill [bluepillproject.org] and Vitriol [theta44.org] utilize SVM and VT-x to virtualize the host platform and remain undetected and immune from removal. They're not widespread, but an attack vector exists, which implies the security concerns over them.

      Makes sense to me.
      • by Ed Avis ( 5917 )
        If you choose not to run a virtualization layer, that doesn't (as far as I know) make you any safer from rootkits like Blue Pill.

        If you run a virtualization layer, that doesn't (as far as I know) make you more vulnerable to rootkits.

        In any case, a rootkit can work only once you've already compromised the system. What happens once the system is compromised is not really relevant; we take it for granted that an attacker with root access can do anything he or she wants. The question of security mostly hinges
    • by tji ( 74570 )
      Yes, precisely. He doesn't say that he has looked into virtualization code and it has X problems. Or, even virtualization is vulnerable to Y attacks. He says "Those of us who have experience with the gory bits of the x86 architecture can clearly say
      that we know what would be involved in virtualizing it, and if it was so simple, we would not still be fixing bugs in the exact same area in our operating system going on 12 years."

      In other words "I've never really studied the problem. But my guess is that
  • by Nova Express ( 100383 ) <lawrenceperson AT gmail DOT com> on Thursday October 25, 2007 @11:12AM (#21114985) Homepage Journal
    The snippet presented seems to suggest that more security holes in virtualization = less secure operating system, or OS(X) + V(X), where OS(X) represents the operating system vulnerabilities and V(X) represents virtualization vulnerabilities.

    However, I see this more as if the virtualization layer actually sits under the OS layer, then the actual security for remote intrusion would be, first, Y/OS(X), THEN Y/V(X), where Y is the number of people with the knowledge to exploit each vulnerability. Thus, someone who wanted to exploit the system would both have to be capable of exploiting an OS vulnerability, and THEN also exploiting a virtualization vulnerability.

    (And we're talking about remote usage, because we all know it's virtually impossible to protect a system from anyone who has direct access to the hardware.)

    I understand that reality may not be quite as tidy, but it still seems like a virtualized system would be much more secure than a non-virtualized system, if only because the increased level of knowledge involved means a smaller number of hackers capable of exploiting both layers. What am I missing?
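
    The layered reading above can be sketched in the same toy terms. This is a hypothetical illustration: the exploit probabilities are invented, and the two holes are assumed to be independent.

```python
# Toy model of the layered view: to reach the host, an attacker must
# chain an OS exploit with a separate hypervisor exploit, so the two
# probabilities multiply. The guest itself still falls to the OS
# exploit alone. Numbers are invented for illustration.

p_exploit_os = 0.10    # assumed chance an attacker can break the guest OS
p_exploit_virt = 0.02  # assumed chance they can also break the hypervisor

p_guest_compromised = p_exploit_os                  # guest: one layer to break
p_host_compromised = p_exploit_os * p_exploit_virt  # host: must chain both

print(p_guest_compromised)
print(p_host_compromised)
```

    The host only benefits if the two holes really are independent; a shared substrate (the x86 quirks Theo cites) would undercut that assumption.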

    • by Cheesey ( 70139 )
      I understand that reality may not be quite as tidy, but it still seems like a virtualized system would be much more secure than a non-virtualized system, if only because the increased level of knowledge involved means a smaller number of hackers capable of exploiting both layers. What am I missing?

      I think you might be assuming that the security provided by the OS and the VM are multiplicative, that the result of having both is much stronger than the sum of the two parts. But that might not be true, because
    • I understand that reality may not be quite as tidy, but it still seems like a virtualized system would be much more secure than a non-virtualized system, if only because the increased level of knowledge involved means a smaller number of hackers capable of exploiting both layers. What am I missing?

      What you are missing is that hackers with different skillsets might talk to each other. This is similar to defeating DRM, another security model. Once it's broken by one person, there's nothing to stop him from sharing it with everyone else.

    • it still seems like a virtualized system would be much more secure than a non-virtualized system, if only because the increased level of knowledge involved means a smaller number of hackers capable of exploiting both layers. What am I missing?

      #1) The simple fact that the alternative to virtualization is separate physical hardware, where NO amount of knowledge can allow you to break out of the hardware and gain control of the 10 other systems.

      #2) Reality really isn't nearly that tidy. Anyone who is skilled

  • by VincenzoRomano ( 881055 ) on Thursday October 25, 2007 @11:16AM (#21115063) Homepage Journal
    And as there is no engineer that can develop hardware without security bugs, the only solution is to stay with insecurity!
  • This sounds suspiciously close to his comments about journaling filesystems when asked why OpenBSD didn't support them (which boiled down to "journaling sucks, use softdeps instead"). OpenBSD has native support for exactly zero virtualisation schemes, whereas NetBSD has native Xen support (something OpenSolaris is working on, if they don't have it already), FreeBSD and Linux both have support for kqemu, and Linux and Windows both have support for VMware, VirtualBox and kqemu.

    For fuck's sake, OpenBSD can't ev
    • For fuck's sake, OpenBSD can't even offer a modern version of WINE in their ports (the one they offer is from 1999, and is broken to boot)

      The ports tree is 3rd party stuff, not OpenBSD. Why don't YOU contribute instead of whining.

      When a Port Is Lagging Behind the Mainstream Version [openbsd.org]

      "The ports collection is a volunteer project. Sometimes the project simply doesn't have the developer resources to keep everything up-to-date. Developers pretty much pick up what they consider interesting and can test in t

      • by RLiegh ( 247921 ) on Thursday October 25, 2007 @12:01PM (#21115773) Homepage Journal

        For fuck's sake, OpenBSD can't even offer a modern version of WINE in their ports (the one they offer is from 1999, and is broken to boot)

        The ports tree is 3rd party stuff, not OpenBSD. Why don't YOU contribute instead of whining.

        Because I have a wealth of already working solutions to choose from. Why in the fuck would I waste my time contributing to a project full of assholes (read the mailing lists for details) who don't even want to admit there's a problem in the first place -much less fix it?

        Don't know about you, but I've got a life to live; if OpenBSD doesn't feel like offering virtualisation technology (or -in the case of WINE- compatibility functionality) then I'll simply move on and use operating systems which do.

        And if Theo insists on making it sound as if the problem isn't OpenBSD's inability to support virtualisation but virtualisation itself; then I reserve the right to laugh my ass off at his extremely silly pompousness.
    • He has a point inasmuch as adding complexity invariably adds security issues that need addressing. However, auditing the Xen code makes much more sense considering the possible security benefits in the long run than avoiding it altogether. Partitioning of the system's resources is a net win for me.
    • "journaling sucks, use softdeps instead"

      I fail to see why you have a problem with that statement. It's true that softdeps has better performance than journaling, both in theory, and in practice. You need only look at FreeBSD.

      FreeBSD and Linux both have support for kqemu and Linux and Windows both have support for VMWare, Virtualbox and kqemu.

      kqemu is borderline useless on FreeBSD. I don't notice any performance difference with or without it. So OpenBSD is on pretty much equal footing. Not to mention th

  • credibility? (Score:4, Insightful)

    by Known Nutter ( 988758 ) on Thursday October 25, 2007 @11:22AM (#21115153)
    Theo's childish, condescending and pointless choice of language seems to undermine his credibility. Although he may be an authority on the subject, I think he owes it to himself - as well as the rest of the community he helped to create - to communicate in a more professional, civilized and respectful manner.

    He's in the same bucket as Dvorak - who wants to listen to the little twerp?

  • I do a lot of prototyping and testing out of scenarios with virtual machines. (40+ iterations for servers and client) Not all are complete builds as I do a lot of cloning. If you fire up a virtual machine that hasn't been in use for a while, you may need to spend time with security updates. Also if you didn't place or adequately configure virus protection and a firewall in an original clone you may end up with a number of machines with poor security. On the other hand cleaning up viruses is easy with my s
  • I am not sure that the argument is right. Saying that virtualization adds possible security bugs is like saying that adding a personal firewall adds functionality and thus possible exploits.

    Virtualization is more secure IMHO than current process isolation in most operating systems, but both can fail.

    Theo's argument about security just proves that the claim from the Linux side that security is "people wanking around with their opinions" is not unrealistic.

    You have to realize that the alternative to virtualisation is ge
    • 1. I'd argue that a personal firewall does make a system more vulnerable in many cases, not because of its own vulnerabilities (usually few, if any), but because it makes its user feel (more) secure. Similar to always sporting a kevlar vest, it'll certainly help in some situations, but you might forget that you still are vulnerable and, after needlessly walking through the worst part of some city, end up in an armed robbery with a gun pointed at your (un-kevlar'd) head.

      Additional to that, virtualization do
  • I have no idea what Theo's point here is. His statement is like saying, "Firewalls are programmed by the same people who write operating systems. If you think Firewalls have no security issues, then you're deluded, if not stupid." Therefore, Firewalls are useless and just increase complexity.

    Virtualization, from a security standpoint, is just a firewalling method. It increases isolation between instances, and more isolation is ALWAYS good.

    • Virtualization, from a security standpoint, is just a firewalling method. It increases isolation between instances, and more isolation is ALWAYS good.
      More isolation is ALWAYS good? Bah, I think it's a real pain. It makes it much harder for me to write security exploits.
  • I believe that he is working from the unstated assumption that the virtualization host and guest OSen all have approximately equivalent levels of security. If so, then virtualization does just increase the number of holes available for exploit. Rather like the way a RAID system increases the overall chance of a drive failing, because of using more drives. The difference of course, is that the RAID system is able to effectively isolate the failed drive, where a security exploit in one OS can potentially prov

  • Theo tends to be cynical and pessimistic about just about anything that's got to do with security, and he's got good reasons to be... things that people push as security features turn out to be irrelevant or even actually dangerous a large proportion of the time. They're not batting 1.000, or even 0.500, by any means.

    This doesn't mean that OpenBSD won't get some kind of virtualization support, it just means that he's being careful and conservative and letting other people be the pioneers. I think this is a good thing, on balance... you don't want to be pulling arrows out of your back because your secure OS decided to take you through unknown territory.

    Yeh, he's got an emphatic way of putting things. You just gotta deal with it. Several years ago I asked him about stack protection and his response was eerily similar to this. A few years later OpenBSD enabled stack protection by default.

    I think he's got a point, but he's comparing running separate computers to running separate OS instances on the same computer. If that's how you're using VMs, then yes, the resulting system is less secure overall... and for Windows that's often how VMs get used because Windows tends to make it unreasonably hard to run multiple instances of the same application on the same computer. If you're replacing less extreme isolation mechanisms on the same computer with VMs, though, then you're adding an extra layer of defence. Think of it as a hierarchy...

    * Same application instance (eg, web server modules)
    * Separate applications (running multiple instances of apache)
    * User level separation (multiple accounts for the separate instances)
    * File system separation (multiple chrooted instances of apache)
    * OS-level separation (eg, FreeBSD jails and I think Solaris domains)
    * Hardware-assisted software virtualization (VMware, Xen)
    * Hardware virtualization (IBM VM "penguin farms")
    * Separate physical computers

    It might be argued that IBM's virtual machines should be lumped with virtualization, or that separate computers should be split from blades, and things like NAS and SAN complicate things, but you get the idea.

    Theo's looking at the hierarchy starting at the bottom, and seeing a reduction in security. Other people are starting at the top, and seeing an increase in security. Both sides are correct, it depends on where you start.
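
    The "where you start" framing can be made concrete with a small sketch. This is a hypothetical model: the rule that a breach at one level also defeats every weaker mechanism is a deliberate simplification for illustration.

```python
# Toy model of the isolation hierarchy above, ordered weakest to
# strongest. A breach of the mechanism at index k is taken to defeat
# every weaker mechanism as well; this is an invented simplification.

HIERARCHY = [
    "same application instance",
    "separate applications",
    "user-level separation",
    "file-system separation (chroot)",
    "OS-level separation (jails/zones)",
    "hardware-assisted software virtualization",
    "hardware virtualization",
    "separate physical computers",
]

def defeated_by_breach(k):
    """Mechanisms that no longer isolate anything once level k falls."""
    return HIERARCHY[:k + 1]

# Escaping a VM (index 5) also moots chroots, user separation, etc.,
# but separate physical computers still hold:
print(defeated_by_breach(5))
```

    Read from the bottom (separate machines), a VM is a downgrade; read from the top (one shared OS), it is an upgrade, which matches the conclusion that both sides are correct depending on where you start.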
  • by cpm99352 ( 939350 ) on Thursday October 25, 2007 @11:52AM (#21115623)
    As another person pointed out on the OpenBSD list, see http://taviso.decsystem.org/virtsec.pdf [decsystem.org] for Tavis Ormandy's analysis of various VMs -- attack methods were exploiting bugs in the x86 architecture as well as invalid I/O device communication.
  • by Schraegstrichpunkt ( 931443 ) on Thursday October 25, 2007 @12:07PM (#21115849) Homepage

    Theo de Raadt argues that it's more secure to put applications on separate machines than to consolidate them into a single machine.

    L. V. Lammert very inarticulately argues that having a VM provides more security, because otherwise, you're not going to put applications on separate machines, because it's too expensive.

  • by kscguru ( 551278 ) on Thursday October 25, 2007 @12:08PM (#21115865)
    Virtualization is NOT intended for security. Period, end of story, full stop. Security is a secondary effect from having smaller interfaces, and you DO get smaller interfaces with inherent security advantages.

    Here's the first truth of security: your ability to secure a system is INVERSELY PROPORTIONAL to the size of the interface to that system. Every interface point is a potential attack vector, whether direct (an attacker can exploit the interface) or indirect (something outside your control is loaded at interface A, then an attacker at interface B causes A to exploit something). Most security products try to reduce the size of interfaces (e.g. a firewall limits the number of open ports, then further excludes some types of traffic from those ports).

    Look at a general purpose operating system kernel. There are hundreds of system calls (direct attack vectors), hundreds more driver interfaces (indirect attack vectors - driver interfaces are privileged and thus drivers must be bug-free), a few thousand more configuration points (Windows Registry, Linux /sys and /proc trees). Add the libraries that make up the rest of the operating system, and the number of APIs has exploded to thousands, if not tens of thousands.

    Now look at a hypervisor stack. The hypervisor::guest interface is the CPU instruction set (extremely well documented and easy to programmatically verify, especially when 99% of instructions can be verified to have no side effects!). Much narrower interface than a general-purpose API. The driver::hypervisor interface is narrower too, since the hypervisor only uses a lower-level interface (e.g. Xen's block device interface, VMware's SCSI interface) that happens to be simpler and better documented. Configuration API is smaller, since it only needs to manage virtual machines, not every possible combination of user-level program and device.
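    The "easy to programmatically verify" point can be sketched with a toy scanner in the spirit of binary inspection: given a decoded instruction stream, anything not on a whitelist of side-effect-free instructions gets flagged for the hypervisor to trap and emulate. (The mnemonics and the whitelist below are illustrative; this is not a real x86 decoder.)

    ```python
    # Instructions with no privileged side effects can run natively;
    # everything else must be trapped and emulated by the hypervisor.
    # Both sets here are illustrative samples, not a complete x86 list.
    SAFE = {"mov", "add", "sub", "cmp", "jmp", "push", "pop"}
    PRIVILEGED = {"cli", "sti", "hlt", "in", "out", "lgdt", "wrmsr"}

    def classify(stream):
        """Split a decoded instruction stream into run-natively vs emulate."""
        native, emulate = [], []
        for insn in stream:
            (native if insn in SAFE else emulate).append(insn)
        return native, emulate

    native, emulate = classify(["mov", "add", "cli", "cmp", "out"])
    print(emulate)   # ['cli', 'out']
    ```

    The point isn't the ten-line implementation — it's that the whole interface is a fixed, finite instruction set, which is what makes this kind of exhaustive classification feasible at all.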

    It's the old microkernel / monolithic kernel debate all over again, where a hypervisor is a microkernel and a general-purpose OS is a monolithic kernel, and the performance loss is small enough that companies are using it in production today. Microkernels have advantages in being easier to secure, more robust in the face of bugs ... monolithic kernels are faster. Is the smaller API (and increased security) worth the loss in performance?

    Here are some security thoughts, based on actual experience with virtualization bugs.

    • The largest number of security vulnerabilities appear in user-level servers, e.g. NAT, DHCP, ssh, apache. Potential commands to these services are unlimited (wide API), often involve text-parsing in C code (inherently difficult), and so they are hardest to secure.
    • A moderate number of security vulnerabilities appear in paravirtualized device emulations. Paravirtual devices are less well specified (medium-width API), so the implementations have more bugs, often from ways the paravirtual device spec doesn't anticipate. (Yes, hardware folks write more complete specs than software folks).
    • Extremely few security vulnerabilities appear in the hypervisor::guest boundary. This boundary is extremely well specified via data sheets and architectural manuals. (narrow API).
  • When did segment registers and page tables quit working? I guess I must have missed that errata!
  • by Chris Snook ( 872473 ) on Thursday October 25, 2007 @12:38PM (#21116353)
    [disclosure text="I work for a company that sells virtualization."]

    Theo's expertise, and indeed that of the entire OpenBSD project, is in the realm of provably correct security. Virtualization adds yet another layer where something can go wrong. Sure there are and will be bugs. We're finding them and fixing them, just as we've always done. From an absolute security standpoint, Theo's right.

    Of course, most businesses couldn't care less. Businesses don't view security as an absolute thing, because human factors make it generally impossible. Businesses view security as a risk, with associated probabilities and costs, worst-case scenarios, likely scenarios, mitigation strategies, and ultimately, diminishing marginal returns. For businesses using virtualization to consolidate systems, it generally reduces risk because it makes it easier to implement policies that mitigate human factors.

    To be precise, virtualization *technology* decreases security, but virtualization *solutions* increase security, at least when done well, which is much more practical than the technical absolute of "done right".

    [/disclosure]
  • by Animats ( 122034 ) on Thursday October 25, 2007 @12:51PM (#21116533) Homepage

    I'm always amazed that virtualization on x86 works at all. The architecture didn't support it, and VMware dealt with that by dynamic code patching. Then Intel and AMD both added some hardware support, but 1) it's incompatible between Intel and AMD, and 2) it's single-layer (you can't run a VM on a VM on a VM, unlike IBM mainframes, where that works quite nicely). And then there's the issue that you now have two levels of scheduler, which may or may not play well together.

    But those aren't the real problems. From a security standpoint, a VM running a whole operating system is an overly large security domain. It doesn't even contain, say, exploitable PHP scripts on a server, let alone a rootkit. In fact, almost any exploit that will work on a standalone server will work on the same server inside a VM, and can cause just as much trouble on the net. Now you can have multiple corrupted VM partitions!

    What we're really seeing is that server virtualization is a way to unload security worries from the server farm operator (be it a hosting company or an in-house operation) onto the user. If the server operator just gives the user an empty partition and takes no responsibility for maintaining it, it's cheaper to be a server operator. Server management is easier. But it's no more secure from the standpoint of network, user, or data protection. Too much code in one security box.

  • by QuasiEvil ( 74356 ) on Thursday October 25, 2007 @12:52PM (#21116555)
    Ugh, I hate it when I have to agree with Theo, but in this case, I think he's got a point. Hypervisors will have bugs, and some small portion of those will lead to security holes. I don't care how carefully the code is audited, when you're dealing with handling an architecture this nasty, there will be bugs. Because of the smaller amount of code and services that a hypervisor provides, it should be easier to get all the big bugs out, but I'd be willing to bet that some small ones will hang on, undiscovered, for some time.

    As far as VMs and security, there are two types of security - defense against the malicious, and defense against the retarded. While VMs may not add much in the long term against the malicious (and may even expose more risk), I'd argue right now they're reasonable tools of isolation, until virtualization becomes mainstream and crackers get wise to exploiting the host machine.

    They are effective, from a defense against the malicious standpoint, for isolating old platforms that you continue to need, but can't be exposed to the world. We have a proprietary tool that only runs on NT 4 (we've tried 2k, XP, etc. and no dice) that we absolutely must have at work. NT4 is no longer patched when security issues are found, and there are no longer drivers for the hardware we've had to move it to. So we run it in a VMware VM that has no network access. Problem solved.
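    An air-gapped guest like that NT4 box can be enforced in the VM's configuration rather than trusted to the guest. As a rough sketch, a VMware .vmx file with no virtual NIC and host/guest data channels disabled looks something like the fragment below (option names from memory; worth checking against VMware's hardening documentation before relying on them):

    ```
    ethernet0.present = "FALSE"
    isolation.tools.copy.disable = "TRUE"
    isolation.tools.paste.disable = "TRUE"
    sharedFolder.maxNum = "0"
    ```

    With no virtual NIC present, the guest has nothing to attach an exploit to, regardless of how unpatched the OS inside is.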

    I'd argue that VMs are very effective on the "defense against the retarded" side. We have a shared departmental webserver here at my job. I'm the main admin in my spare time. However, when they merged a bunch of groups, they made us share the box with them. Thus, management mandated they be allowed to change things as root to fit their needs - adding users, resizing LVs, fiddling with apache, etc. They kept fucking up the box and breaking our stuff. So eventually I loaded up Xen and gave them their own VM that they could break in any way they wanted. Tada - their virtual box is always screwed up, mine stays up all the time, and management is happy because we didn't need more hardware.

    Arguably, the above also fits within the greater utilization purpose that I see being the big driver for virtualization. Realistically, most production boxes are far oversized for what they do. If you could stack virtual boxes on real iron to get better asset utilization, that's going to be the driving force for business, as it's a simple, quick way to reduce costs.
