Virtualization In Linux Kernel 2.6.20

mcalwell writes with an article about the Kernel-based Virtual Machine (or KVM for short) in the release candidate Linux 2.6.20 kernel. From the article: "[T]he Linux 2.6.20 kernel will include a full virtualization (not para-virtualization) solution. [KVM] is a GPL software project that has been developed and sponsored by Qumranet. In this article we are offering a brief overview of the KVM for Linux as well as offering up in-house performance numbers as we compare KVM to other virtualization solutions such as QEMU Accelerator and Xen."
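
For readers who want to try it, here is a minimal sketch of bringing up a KVM guest (assuming a VT/SVM-capable CPU, the kvm modules from a 2.6.20-rc kernel, and the KVM-patched QEMU userspace; the binary name, memory size and image path are illustrative):

    # load the core module plus the one matching your CPU
    modprobe kvm
    modprobe kvm-intel        # or: modprobe kvm-amd

    # the character device the userspace tool talks to should now exist
    ls -l /dev/kvm

    # boot an existing QEMU disk image with hardware virtualization
    qemu-system-x86_64 -hda disk.img -m 512
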
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Wednesday January 10, 2007 @01:49AM (#17535760)
    Comment removed based on user account deletion
    • Re: (Score:3, Informative)

      by EzInKy ( 115248 )
      I've seen a lot of mentions of file corruption on their mailing list, even with ext3.
      • Re: (Score:3, Informative)

        by Spoke ( 6112 )
        The file corruption talked about has been in the kernel for some time, but recent changes made it more visible and easier to trigger. It should be fixed in the latest 2.6.20rc kernel.

        If you search the kernel archives for ext3 corruption you'll find a couple long threads discussing the issue and the solution.
    • Re: (Score:2, Informative)

      by marol ( 734015 )
      Quoting Torvalds from the 2.6.19 release announcement:
      'So go get it. It's one of those rare "perfect" kernels.'
    • by arivanov ( 12034 ) on Wednesday January 10, 2007 @02:12AM (#17535924) Homepage

      No, the attention has been drawn from people actually giving a fuck.

      Kernels from 2.6.9 onwards are a disaster.

      • PIO IDE causes a deadlock on Via chipsets under heavy IO from 2.6.11 onwards. Worst in 2.6.16, but still reproducible on others.
      • IDE TAPE no longer works from 2.6.10 onwards
      • IDE-SCSI no longer works from 2.6.10 onwards at least up to 2.6.16
      • LONGHAUL is broken to some extent since 2.6.9
      • There is a change in fundamental APIs - termIO (2.6.16), locking (2.6.15), scheduling (every second f*** kernel), etc. - every release, so it takes a full-blown porting effort and untangling of unrelated changes to backport fixes to a driver.

      The original idea was that "distributions will fork off and maintain kernels for releases". This idea has degenerated into "only distributions can fork and maintain a kernel". Sole developers and hobbyists are being treated the same way Microsoft treats them - as a "one night stand". In fact, even distributions are unable to keep up with that. Fedora has half of these bugs in it. So does Etch, so does Mandriva, and so do all the lesser distributions. Only RHEL and SuSE ship something reasonably usable, and it is a year behind on features.

      The reality is that anything past 2.6.9 should be called 2.7.x, and that is it. It may be seriously worth considering Gentoo/BSD or Debian/BSD. While the BSD crowd has its own failings, it does not change fundamental APIs for entertainment purposes every month on the stable branch.

      • Re: (Score:3, Funny)

        by ArsonSmith ( 13997 )
        Way to go Linus. Tell them distros to Fork off!!!
      • by Builder ( 103701 ) on Wednesday January 10, 2007 @04:27AM (#17536622)
        I feel your pain, deeply! A stable API / ABI is absolutely vital for ISV support and the new development model means that you can only get this if you're prepared to pay a large amount of money for your distribution. I don't want to have to pay $1500 for RHEL, but that's the only way I can run an Oracle dev server on a quad box with 16GB ram. The amusing thing is that RHEL is the ONLY piece of software I have to pay for on that machine - our site license gives us free licenses for dev and DR :)

        Anyone other than SLES or RHEL is a second class Linux citizen today. Without vendor support you can forget about trying to run a stable Linux kernel anymore. Bring back the old odd / even split!
        • Re: (Score:2, Interesting)

          by Anonymous Coward

          Just use Solaris. You get to run all the Lunix source and binaries and all the Solaris ones too; the ABI is stable over many years and it has many more useful features than Lunix. Also the virtualisation stuff has been in Solaris a lot longer. Oh, and it handles SMP and NUMA better, and it has ZFS.

        • by Kjella ( 173770 ) on Wednesday January 10, 2007 @05:08AM (#17536864) Homepage
          Anyone other than SLES or RHEL is a second class Linux citizen today. Without vendor support you can forget about trying to run a stable Linux kernel anymore. Bring back the old odd / even split!

          Well, first off there's CentOS if you don't need the support. Secondly, while the kernel guys are happy hacking away at 2.6.x, there are other distributions like Debian and Ubuntu LTS which will support a stable API/ABI for several years.

          Yes, now 2.6 keeps breaking, but does anyone remember the bad old days when distros were backporting hundreds of patches from 2.5 to 2.4? What the distros are shipping now is closer to a vanilla kernel, for better and for worse. They pick one version, stay with it and stabilize it. That's what SLES, RHEL and all the other distros do.
          • by Builder ( 103701 )
            I agree on CentOS - I should have mentioned that, my bad.

            With that said, what is the cost of these distros providing long-term support? Firstly, there is more and more divergence between the distros over time. The patches that each comes up with to backport specific security features will be different, if only slightly. The patches that each comes up with to backport a highly requested feature will be slightly different. Over time these slight differences will add up to become real differences between the distros.
            • by gmack ( 197796 ) <<gmack> <at> <innerfire.net>> on Wednesday January 10, 2007 @08:49AM (#17538408) Homepage Journal

              The patches that each comes up with to backport specific security features will be different, if only slightly. The patches that each comes up with to backport a highly requested feature will be slightly different. Over time these slight differences will add up to become real differences between the distros.

              Distros should NEVER backport features. That's the whole point of the new development system. If you want a stable kernel, stay with the point release you're on and just add the security/stability patches. If you want new features, use a newer kernel.

              That right there was the exact problem with the old even/odd split. The time between the two ended up being so great that people/vendors would start backporting features and destabilizing the "stable series" kernel.

              Distros forking the kernel has always been an annoyance, so it's nothing new either. I've been playing the "which distro has the drivers I need" game since 2.0.x, and it got to the point where I just never use distro kernels anymore; I just compile my own and add that to the installer.
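
              For reference, the build-your-own-vanilla-kernel route boils down to something like this (a sketch; the source path is illustrative and the config and bootloader steps vary by distro):

                  cd /usr/src/linux-2.6.19    # unpacked vanilla tree
                  make oldconfig              # reuse an existing .config, answer only the new options
                  make                        # build the kernel image and modules
                  make modules_install        # install modules under /lib/modules/<version>
                  make install                # copy the image and run the distro's installkernel hook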

                • Distros should NEVER backport features. That's the whole point of the new development system. If you want a stable kernel, stay with the point release you're on and just add the security/stability patches. If you want new features, use a newer kernel.

                That's a fine strategy if newer kernels don't break existing features. But they do because testing is a really hard problem so it doesn't get done well enough.

                Some kernel bugs get introduced in x.y.z and don't get resolved until z+4 (or longer!). Depending on th
        • by kv9 ( 697238 )

          I don't want to have to pay $1500 for RHEL, but that's the only way I can run an Oracle dev server on a quad box with 16GB ram.

          couldn't you just download 50CentOS?

          • Re: (Score:3, Funny)

            by TheLink ( 130905 )
            Shush. Let him keep paying for it.

            Then you keep getting free all that work he's paying for :).

            • by Builder ( 103701 )
              To be fair, I'm talking about a commercial situation. Our initial RH purchase was for just over GBP300,000.00. When we give Oracle that much money, they give us free dev and DR licenses. RH won't. And RH are the ones making money off of the 'free' software!

              RH wouldn't even give us a test Satellite license, so we had two choices - fork out ANOTHER 8k or do all upgrades without testing. Obviously option 2 wasn't viable (if for no other reason than audit points), so we shelled out the cash.

              We could just use dev mac
        • Re: (Score:3, Informative)

          by thue ( 121682 )
          If people really wanted the old stable versions then they would be using 2.6.16.y, which is still being maintained using the same old stable policies as 2.4

          http://en.wikipedia.org/wiki/Linux_kernel#Versions [wikipedia.org]

          The fact that most people don't seem to run 2.6.16 seems to indicate that people are happy to forgo some stability in exchange for having the new features in the latest 2.6.x kernel available now.
          • by Builder ( 103701 )
            The 2.6.x.y tree is there to solve a completely different problem to what was solved by the 2.even.x and 2.odd.x scheme.
            With 2.6.x.y, only fixes to that kernel are added. No new features are added. Ever.

            With the 2.even.x tree, new features were added, but they were stabilised first. The aim (although not always achieved, see NPTL threads for example) was to NOT break the API / ABI during the life of that kernel series. So if I had a driver or a piece of software that worked on 2.4.1, it should STILL work on
        • by r00t ( 33219 ) on Wednesday January 10, 2007 @10:59AM (#17540320) Journal
          See unistd.h for the stable API. Combined with the SVR4 ELF specification, that gives you a stable ABI. It's been a damn long time since Linux lost an old system call. Old a.out binaries from a dozen years ago still run fine. BTW, outside the kernel even glibc is doing well; the biggest problem has been the C++ library, mainly because the C++ committee kept adding features.

          I think your real complaint is that out-of-tree drivers are unsupported. Tough luck. This will never change. I suggest that you get your drivers into the tree, where other people can review them for bugs (afraid of that? embarrassed?) and update them as the rest of the kernel changes.
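
          For context, an out-of-tree driver is normally rebuilt against every installed kernel with kbuild, which is exactly where internal API churn bites (a sketch; "mydriver" is a placeholder module whose directory holds the source plus a one-line kbuild makefile, "obj-m := mydriver.o"):

              # build the external module against the running kernel's build tree
              cd mydriver
              make -C /lib/modules/$(uname -r)/build M=$(pwd) modules

              # every kernel upgrade means re-running this, and any internal
              # API change means patching the driver source first
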
          • by sgtrock ( 191182 )
            I wish I had mod points for you. :(
          • by Apotsy ( 84148 )
            I suggest that you get your drivers into the tree, where other people can review them for bugs

            You mean where people who don't understand them can make "suggestions". See all those rants by Hans. Whatever else you think of him, he's got a point when it comes to that.

          • by Builder ( 103701 )
            It's not FUD when it's true. If we had a stable API / ABI in the kernel, we wouldn't have drivers limited to a specific version of the kernel.

            Why do you think enterprises still rely so heavily on Sun? It's partly because a driver made for Solaris 8 will still work on Solaris 8 years later. You don't get that guarantee with many Linux distros.

            I have no drivers to be afraid or embarrassed about, but if you think that is the only thing stopping companies open sourcing their drivers then you have a lot to learn about
      • And the reasons you cite above are some of them. People think Pat and the team are stuck in the past but he probably has a better handle on how linux kernel dev has gone down the toilet with 2.6 than many people.
      • Re: (Score:3, Informative)

        by gmack ( 197796 )

        IDE-SCSI no longer works from 2.6.10 onwards at least up to 2.6.16

        IDE-SCSI never worked properly. I've had constant problems with it since I started CD burning on Linux. Thankfully it is now obsoleted by the new ATA drivers, since ATA devices just show up on the system as SCSI devices. If you really need SCSI support for IDE devices, I highly suggest trying the new drivers.

      • by davek ( 18465 )
        This worries me, a lot. I remember how pissed I was when I first jumped back into Linux a few years ago, and tried to compile a device driver. I quickly realized that EVERYTHING that I had spent months learning back in college about linux devices was now completely bunk. This is open source, isn't it? The whole point is to be able to hack it. You can't hack it if you have to learn an entirely new API every few months.

        Perhaps it's time to stop the Linus-worship anyway, and go with the HURD:

        http://www.gnu [gnu.org]
        • Perhaps it's time to stop the Linus-worship anyway, and go with the HURD:

          From the page you linked, under "Status of the project":

          GNU Mach 1.3 was released in May 2002, and features advanced boot script support, support for large disks (>= 10GB) and an improved console.

          GNU Mach is used as the default microkernel in the GNU/Hurd system. It is compatible with other popular Mach distributions. The device drivers for block devices and network cards are taken from Linux 2.0.x kernel versions, and so a

          • by davek ( 18465 )
            OK, then what?

            The linux kernel is fast becoming another piece of black-box software. Even if it remains open-source, it certainly isn't free (as in speech) software. I've even read that most Linux kernel developers don't even agree with the basic philosophy of OSS.

            If we can no longer trust Linux, and HURD is practically useless, where will our kernel come from?
        • by vadim_t ( 324782 )

          This is open source, isn't it? The whole point is to be able to hack it.


          Exactly, so people went and did just that.


          You can't hack it if you have to learn an entirely new API every few months.


          Changing the API was what the hacking consisted of. There's only so much you can improve while keeping everything looking the same.
      • Does anybody still have an IDE tape drive that hasn't died of old age? Is it actually big enough to do a backup?

        The IDE-SCSI abomination is a foul and evil hack that should have been removed many years ago. Back in the early days, it was needed for CD burning. Linux no longer requires IDE-SCSI. If the cdrecord author told you otherwise... well, he was lying because he damn well knows this isn't true.

        Your "fundamental APIs" are not APIs at all. They are kernel-internal details. Screwing around with unmaintai
        • Does anybody still have an IDE tape drive that hasn't died of old age? Is it actually big enough to do a backup?
          Apparently you don't know much about the subject [superwarehouse.com].

          If the driver is broken and nobody wants to maintain it, it should be marked obsolete or simply removed.

      • Re: (Score:3, Informative)

        by iabervon ( 1971 )
        IDE-SCSI no longer works from 2.6.10 onwards at least up to 2.6.16

        It's ironic that you mention IDE-SCSI as not working. The latest excitement is that devices that used to be treated as IDE are now being treated as SCSI if you build the appropriate drivers, so people are finding that the drives they thought were "IDE" are actually "ATA", and on /dev/sda now. Not only is the functionality of treating "IDE" devices as "SCSI" still available, it doesn't require a special module, and it's becoming default. They'
    • Re: (Score:3, Informative)

      by jnana ( 519059 )

      I'm not sure in general, but I've been happily using 2.6.19 for a while with no issues.

      As for kvm, I downloaded it about a week ago and manually built and installed it (on 2.6.19), and I've had no trouble with it at all. It was very easy to build and install following the instructions [sourceforge.net], and creating images and installing a new OS on them is trivial. I set up a couple of images for experimenting with Ubuntu and Fedora (my main OS is Gentoo), and I set up another image on which I installed Plan 9, just to pl
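
      The image setup described above is roughly the following (a sketch; names, sizes and the installer ISO are illustrative, and the QEMU binary name depends on how the KVM userspace was installed):

          # create a blank disk image for the guest
          qemu-img create ubuntu.img 8G

          # boot the installer ISO against it, with /dev/kvm providing acceleration
          qemu-system-x86_64 -hda ubuntu.img -cdrom install.iso -boot d -m 512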

  • by Anonymous Coward
    Cutting right to the chase here, if I have this new kernel, and a CPU that supports it (only the latest generation from Intel and AMD do), I should be able to install Windows XP as a guest OS and run it in a window on my Linux machine? That would be very cool and could really help the adoption of Linux. I know I can do something like this with VMWare right now, but if it's built in to the kernel that would be even better. And yes I would have to buy a new machine with one of these current-generation CPUs
    • by eno2001 ( 527078 ) on Wednesday January 10, 2007 @02:23AM (#17535976) Homepage Journal
      My experience so far...

      After playing around with paravirtualization with Xen for the past two-plus years, I finally got the cash in August to buy a cheapo AMD dual-core 64-bit system (~$800 at Best Buy: an HP system with a 4200 and 2 gigs of DDR2 RAM). I've run both Xen and QEMU on it under 64-bit Gentoo Linux. The performance of Windows XP on Xen vs. QEMU is fairly close. It seems to me that where Xen suffers is disk I/O: anything that's disk intensive seems to eat up the CPU. I suspect this wouldn't be the case on better hardware with a high-performance SCSI/RAID system; that should, at least, make things a bit better anyway.

      But for the time being I'm sticking with Xen since it's just too easy to use, and I am especially interested in the live migration features. As long as you have centralized disk storage, you can move live VMs between physical hosts with less than a second of interruption (i.e. your users will never notice). Keep in mind, I'm doing this all at home, as I'd really like to collapse many of my machines into one or two boxes and keep everything else as simple X displays where GUIs are needed.

      I've currently got four VMs running on the box, two of them fully virtualized (Windows XP SP2 for accessing DRMed crap, and Red Hat Linux 7, which still hosts some services I don't want to part with) and the other two paravirtualized (Domain0, which is just the VM management environment, and my Gentoo Asterisk "PBX"). Paravirtualized performance is damn amazing. I think if I used strictly paravirtualized OSes I could probably squeeze 20 VMs out of this guy with decent performance. I actually just added two more gigs to the system tonight, and if I assume 128 megs per virtual machine (I've allocated 512M to the Windows XP VM) I can get up to 32 VMs running simultaneously.

      As far as KVM goes, I've had a good deal of experience with QEMU, and if KVM is similar, there are some limitations I hope they will overcome. (For what it's worth, the hardware-based virtualization in Xen is also a modified QEMU process, called qemu-dm.) The main one is PCI device allocation: Xen allows you to partition your PCI devices and assign individual cards to specific VMs. I don't think QEMU does this, and I expect that KVM doesn't either.
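
      For the curious, the Xen PCI partitioning and live migration mentioned above look roughly like this with the Xen 3.0-era xm tools (a sketch; the PCI address, guest name and target host are placeholders, and relocation has to be enabled in xend-config.sxp first):

          # hide a PCI device from dom0 at boot so a guest can claim it
          #   (dom0 kernel command line):  pciback.hide=(0000:00:1f.2)
          # then hand it to the guest in its config file:
          #   pci = [ '00:1f.2' ]

          # live-migrate a running paravirtualized guest to another host
          xm migrate --live myguest otherhost
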
      • ...where Xen suffers is disk I/O. Anything that's disk intensive seems to eat up the CPU.

        I don't understand why that would be. A disk is slow - glacial - by processor standards. The disk I/O subsystem should submit a request to the disk, then free up the kernel/system to go off and do other things. "other things" may eventually become "wait around for the disk subsystem", but I thought that would show up as idle time.

      • How?

        Asterisk is a real-time process. It expects to wake up 1000 times per second, exactly, on time every time.

        Surely you're getting massive drop-out problems.
        • by eno2001 ( 527078 )
          Ahhh... you've apparently never experienced paravirtualization. It's nearly as fast as running on the bare metal. I mean in the neighborhood of 95-98% of the real system's performance. We're not talking VMWare Workstation or Virtual PC/Server here. This is the real deal... If I were running the PBX in Xen using full virtualization, I'd probably see what you're talking about. But not with paravirtualization. It really is an amazing thing. The reputation that virtualization seems to have of being "slo
    • The article says it'll support x86 versions of Windows as guests, but not the 64-bit versions.
  • Acronym overload (Score:5, Insightful)

    by phoebe ( 196531 ) on Wednesday January 10, 2007 @01:52AM (#17535788)
    Couldn't they just try to use a different acronym? How about KbVM?
    • Re: (Score:2, Funny)

      Let me get this straight... your solution to "acronym overload" is to *add* a character.

      It's opposite day again, isn't it?
    • by repvik ( 96666 )
      No, that'd avoid confusion with Sun's KVM: http://java.sun.com/products/cldc/wp/ [sun.com]
    • Re: (Score:3, Informative)

      by dunstan ( 97493 )
      Strictly it's not an acronym unless it is commonly pronounced as a word.

      NATO is an acronym, KVM isn't.
    • by aug24 ( 38229 )
      +1 Insightful? Were you aiming for +1 funny (Key b oard Video Mouse) and the mods just didn't get it?!

      J.

    • by Rich0 ( 548339 )
      I'd be happy if Linux worked right with my mouse when I use my cheap KVM. It works fine with Windows, but not with Linux. Sure, I understand the issues with mice being stateless and all that, but obviously MS has a fix. On my desk I have one keyboard, one monitor, and two mice as a result. And no, I don't want to go out and spend $300 on a nicer unit...
  • by EvanED ( 569694 )
    Why no comparison against VMWare or native?

    (VMWare I can kind of see, if they were deliberately sticking to all free solutions, but no comparison to running on the host system? That's just bad reporting IMO.)
    • Mod me down! (Score:2, Insightful)

      by EvanED ( 569694 )
      Okay, I read the charts wrong because I'm apparently an idiot. Native times are the first bar in each graph.

      Though VMWare would still have been nice...
      • by Anonymous Coward
        VMWare will perform *much* better on any workload with heavy process thrashing, especially forking (such as the LAME compilation or anything that does an autoconf configure and make). This is due to the Intel and AMD virtualization extensions not going far enough to handle Unix-style OS workloads well (hardware-assisted MMU and/or TLB virtualization support is lacking). Context switching takes a heavy toll. Windows doesn't do it so much, so it won't suffer as much.

        Also, only AMD's SVM supports full-virtua
      • Re:Mod me down! (Score:4, Interesting)

        by Bert64 ( 520050 ) <.moc.eeznerif.todhsals. .ta. .treb.> on Wednesday January 10, 2007 @05:09AM (#17536866) Homepage
        I heard that the vmware license specifically excludes rights to benchmark it, or at least to publish those benchmarks.
        • Re:Mod me down! (Score:4, Informative)

          by WNight ( 23683 ) on Wednesday January 10, 2007 @08:24AM (#17538060) Homepage
          There's no valid way to enforce post-sale contracts, EULAs aren't valid.
          • There's no valid way to enforce post-sale contracts, EULAs aren't valid.

            I'm afraid you're terribly mistaken. The valid enforcement is called a civil lawsuit and it requires lots of lawyers and money to successfully defend against. Until tort reform makes frivolous lawsuits carry heavy penalties, large companies will continue to bring civil cases against anyone they don't particularly like.

            Also, judges have been more than happy to uphold EULAs as binding contracts in all the cases I know of. The way to
    • Or existing hardware KVMs. I can switch between 8 machines on one KVM and can even chain them together if I need more.
    • by Curtman ( 556920 )
      Why no comparison against VMWare or native?

      I read this [kerneltrap.org] the other day on Kerneltrap (with their new look - love it or hate it) which seems to say that paravirtualization support has been added to KVM. They have several very impressive benchmarks which include native (but not VMWare).
  • Apples to Oranges (Score:4, Interesting)

    by X0563511 ( 793323 ) * on Wednesday January 10, 2007 @02:03AM (#17535880) Homepage Journal
    So... we can compare Xen and KVM to Qemu now? The next time nVidia updates their drivers we should benchmark them against MESA OpenGL...

    Xen and KVM utilize (require, if I remember correctly) support for virtualization-specific processor instructions. Qemu does not.
    • Comment removed (Score:4, Interesting)

      by account_deleted ( 4530225 ) on Wednesday January 10, 2007 @03:38AM (#17536380)
      Comment removed based on user account deletion
      • by k8to ( 9046 )
        TFA is talking about full virtualization as opposed to paravirtualization. Xen does require virtualization ISA instructions to achieve this, as opposed to VMWare, which achieves it through much trickery. KVM is full-virtualization only, and only runs with these ISA instructions.

        It was only a few pages of text, about 10 paragraphs.
    • Re:Apples to Oranges (Score:5, Informative)

      by repvik ( 96666 ) on Wednesday January 10, 2007 @04:06AM (#17536526)
      Xen requires a P6 or better at this time (available for ~5 years). They hope to add support for ARM and PPC at a later time. KVM, OTOH, depends on brand-spanking-new CPUs with virtualization instructions. QEmu just requires some CPU-thingy.
      • Right. You need a modern intel processor (at the moment) to run KVM. That means some Pentium Ds and AFAIK pretty much all Core Duos (and definitely all Core 2 Duos.) AFAIK qvm86, the qemu virtualizer module, still has some serious issues. Those of us with a Core Duo (yay! too bad work OWNS it, but I get to take it home) are pretty stoked about this whole KVM thing.
      • by drew ( 2081 )
        Xen requires a P6 or better at this time (available for ~5 years)


        Try double that.
    • Re: (Score:2, Informative)

      by overbored ( 450853 )
      rtfa. not qemu, but qemu accelerator.

      http://fabrice.bellard.free.fr/qemu/qemu-accel.html [bellard.free.fr]
    • Wrong, kqemu does.
    • Even VMWare does not make use of the virtualization-specific processor instructions, because they claim [vmware.com] they don't help:

      32-bit VT works, is not tuned, and won't be officially supported unless it can offer the same performance that users of 32-bit VMs expect. Which probably won't be for another generation or two of VT-like instructions.

      At this point, 32-bit VT is about as useful as support for a 387 math coprocessor on a Pentium - in both cases, the overhead of the support wipes out the gains. 64-bit VT

  • is it no longer required to get full speed out of qemu then?
    • Re:kqemu? (Score:4, Informative)

      by popeydotcom ( 114724 ) on Wednesday January 10, 2007 @03:02AM (#17536202) Homepage
      As I understand it, KVM makes use of the VT instructions present in modern CPUs to make QEMU nice and zippy. Older CPUs don't have those instructions, so they would still "need" kqemu to make QEMU go full speed.
      • by pembo13 ( 770295 )
        How old a CPU are you talking about? Better yet, got a link?
        • Re: (Score:3, Insightful)

          by popeydotcom ( 114724 )
          On Linux it's easy to tell if you have VT:

          egrep '^flags.*(vmx|svm)' /proc/cpuinfo

          If that returns anything, you have VT; if it doesn't, you don't.

          Here's what I get on my desktop (Intel Core 2 Duo).

          alan@wopr:~$ egrep '^flags.*(vmx|svm)' /proc/cpuinfo
          flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr lahf_lm
          flags : fpu vme de pse tsc msr pae mce cx8 api
          • lucky beggar... ;) my best chip is a 2.4GHz Celeron D... I thought it was the bee's knees when I got it and also found I could run that hacked OSX 86 on it... now I'm considering a full upgrade on my box... or else building a better one from scratch. Just as easy to build from scratch and pass the old one down to my grand-daughter to run Edubuntu on... she loves using that when she visits.
  • by Gopal.V ( 532678 ) on Wednesday January 10, 2007 @02:20AM (#17535960) Homepage Journal
    Does the Dec 12th story [slashdot.org] make this one a dupe, or was it just an early warning?
  • Call me when... (Score:2, Insightful)

    they can virtualize XP under Linux, with hardware graphics acceleration and full DX9+ support.
  • by ens0niq ( 883308 ) on Wednesday January 10, 2007 @03:08AM (#17536212)
    > [T]he Linux 2.6.20 kernel will include a full virtualization (not para-virtualization) solution.

    Yep. But Ingo Molnár (yes, the Hungarian kernel hacker) announced [kerneltrap.org] a new patch introducing paravirtualization support for KVM.
  • benchmarks (Score:3, Interesting)

    by Jacek Poplawski ( 223457 ) on Wednesday January 10, 2007 @05:34AM (#17537000)
    Benchmarks in the article show that it is slower than Xen.
    Do you know why?
    Xen requires some support from the virtualized operating system; what about KVM?
    • No, kvm doesn't require the guest to be modded. I have virtual machines that I have been running for ages under qemu (both with the proprietary kqemu module and without). I just started running those same images with kvm, and they Just Work (TM).
    • by vinsci ( 537958 )
      Benchmarks in the article show that it is slower than Xen. Do you know why?
      The test was done without the new KVM MMU optimizations that were included in Linux 2.6.20-rc4 (the tests in the article were done with Linux 2.6.20-rc3). The new optimizations give an almost 20-times speedup [gmane.org] for context switches, with further optimizations still possible.
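
      One way to see this yourself is to run a context-switch microbenchmark inside the guest before and after upgrading the host kernel, for example with lmbench (a sketch; lat_ctx has to be installed in the guest, and absolute numbers will vary with hardware):

          # context-switch latency between two processes with a zero-size working set
          lat_ctx -s 0 2

          # compare the reported microseconds per switch with the host on -rc3 vs -rc4
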
  • by piranha(jpl) ( 229201 ) on Wednesday January 10, 2007 @06:31AM (#17537304) Homepage

    Why do they document the model of CD-ROM drive they used, but not the configuration of each emulation/simulation environment? I was shocked by the LAME compile times--and forced to wonder and guess what the filesystem configuration was. Is the filesystem located in an image file on the "host" computer's filesystem? Wouldn't it be interesting to try using a comparable medium across all benchmarks (shared NFS server, or low-level access to the same block device)?

    Not enough data (CPU time vs. real time, etc.), not enough benchmarks (different filesystem media, etc.), poor documentation (configuration, anyone?), on what doesn't even amount to an official release. Correct me if I'm wrong.
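
    The backing-store question matters because the guest's disk can be set up in quite different ways, for example (a sketch; the file and device names are placeholders):

        # guest disk backed by an image file on the host's filesystem
        qemu-system-x86_64 -hda /var/vm/guest.img -m 512

        # the same guest given low-level access to a dedicated block device
        qemu-system-x86_64 -hda /dev/vg0/guest -m 512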

  • The real problem is, of course, the braindead x86 ISA that won't support full self-virtualization without special "extensions".

    The 68K family was fully virtualizable back in the late '80s (from the 68020 on).

  • 2.6.20 will be the first real release of KVM. This benchmark used 2.6.20-rc3. For 2.6.20-rc4, a new shadow paging implementation was introduced (memory virtualization) that is significantly faster than what was present in -rc3. I've only got microbenchmarks handy, but context switch time, for instance, improved by about 300%.

    I suspect if they reran their benchmarks with -rc4, the KVM numbers would be much more competitive with the Xen numbers (although I do suspect Xen will still be on top--slightly).
  • How does it compare to KVM?
